Call for papers

Feedback Computing is a unique forum built around advancing feedback system theory and practice in modeling, analyzing, designing, and optimizing computing systems with respect to performance, predictability, power consumption, and thermal behavior. Computing systems here range from high-performance grids, cloud and web service infrastructures, and distributed mobile systems to servers, SoCs, embedded systems, and sensor networks. The workshop reflects the growing use of feedback in a broader agenda and is a timely response to the following two trends:
1. Computing systems are growing larger, smarter, and more complex, becoming embedded in the physical world, human interactions, and societal infrastructure. Systematic, feedback-driven approaches are critical for addressing the dynamic complexity that arises in new fields such as cyber-physical systems, cloud computing, social networks, and mobile applications.
2. Advances in disciplines such as machine learning, mathematical optimization, network theories, decision theories, and data engineering provide new foundations and techniques that empower feedback approaches to address computing systems at scale and to achieve goals such as autonomy, adaptation, stabilization, robustness, and performance optimization.

Topics

The Feedback Computing Workshop seeks original research contributions and position papers on advancing feedback control technologies and their applications in computing systems, broadly defined. Topics of interest include but are not limited to:
  • Theoretical foundations for feedback computing
  • New control paradigms and system architecture
  • Sensing, actuation, and data management in feedback computing
  • Learning and modeling of computing system dynamics
  • Design patterns and software engineering
  • Experiences and best practices from real systems
  • Applications in domains such as big data, cloud computing, computer networks, cyber-physical systems, data center resource management, distributed systems, mobility, power management and sustainability, real-time systems, and social networks
We solicit research papers containing original research results, challenge papers motivating new research directions, and application papers describing experiences from real systems. In addition, the workshop will facilitate discussion and collaborative research among the participants. One Best Paper Award will be announced at the end of the workshop to recognize the best current work in feedback computing.

Committee

General Chair:

  • Bhuvan Urgaonkar, The Pennsylvania State University

Program Co-Chairs:

  • Karl-Erik Arzen, Lund University
  • Xue Liu, McGill University

Steering Committee:

  • Tarek Abdelzaher, University of Illinois at Urbana-Champaign
  • Yixin Diao, IBM T.J. Watson Research Center
  • Joseph L. Hellerstein, University of Washington
  • Chenyang Lu, Washington University in St. Louis
  • Anders Robertsson, Lund University
  • Xiaoyun Zhu, VMware

TPC Members:

  • Luca Benini, ETH Zurich
  • Sameh Elnikety, Microsoft Research
  • Yuxiong He, Microsoft Research
  • Martina Maggio, Lund University
  • Nicolas Marchand, Grenoble INP
  • Arif Merchant, Google
  • Luigi Palopoli, University of Trento
  • Eduardo Tovar, Polytechnic Institute of Porto
  • Ming Zhao, Florida International University
  • Shaolei Ren, Florida International University


Important dates

  • Paper submissions due: February 18, 2015 (extended from February 1, 2015, and February 10, 2015)
  • Notification to authors: March 8, 2015
  • Final paper files due: March 20, 2015


Submission

Please use the EasyChair submission system to submit a paper.

All submissions should be formatted according to the standard ACM two-column proceeding guidelines. Manuscript templates are available for download at http://www.acm.org/sigs/publications/proceedings-templates.
The workshop follows a single-blind review process. Authors are invited to submit three types of papers to emphasize the multiple foci of this workshop:
  • Research Papers: Research papers must represent original, unpublished contributions and must not exceed 6 pages in length (excluding references).
  • Challenge Papers: Challenge paper submissions must motivate research challenges with real systems that can take advantage of feedback computing, and should not exceed 3 pages in length (excluding references).
  • Application Papers: Application paper submissions must be based on real experience and working systems. They should be formatted as annotated slides (a visual in the upper half of each page and explanatory text in the lower half) and should not exceed 15 slides in length.


List of Accepted Papers

  • Design and Performance Guarantees in Cloud Computing: Challenges and Opportunities

    Author: Alessandro Vittorio Papadopoulos (Lund University)

    Abstract: In recent years, cloud computing has received increasing attention from both academia and industry. Most of the solutions proposed in the literature strive to limit the effect of uncertain and unpredictable behaviors that may occur in cloud environments, such as flash crowds or hardware failures. However, managing uncertainty in a cloud environment is still an open problem. In such a panorama, the service provider is not able to define suitable Service Level Objectives (SLOs) that are easy to measure and control. In this work we analyze two of the critical problems that are encountered in cloud environments but seldom discussed or addressed in the literature: (1) how to reduce uncertainty by providing suitable control interfaces at different levels of the computing infrastructure; (2) how to assess performance evaluation in order to get probabilistic guarantees for the SLOs. We briefly describe the two problems and envision some possible control-theoretical solutions.

  • MIRRA: Rule-Based Resource Management for Heterogeneous Real-Time Applications Running in Cloud Computing Infrastructures

    Authors: Yong Woon Ahn, Albert Mo Kim Cheng (University of Houston)

    Abstract: Real-time software and hardware applications are attracting more attention from many different areas of industry and academia due to the exponentially growing markets for Cyber-Physical System (CPS) and Internet of Things (IoT) devices. In order to satisfy the high scalability requirements of these applications for data processing, storage, and network bandwidth, using cloud computing technologies has become one of the most cost-efficient and practical options. However, currently available public cloud computing technologies were originally designed for best-effort applications such as web services, and the cloud vendors' service level agreements (SLAs) do not provide any application-level Quality of Service (QoS) guarantees. In this paper, we propose a novel middleware platform, MIRRA, running between existing cloud computing technologies and real-time application servers. MIRRA provides multiple software layers to schedule real-time tasks by automatically scaling virtual resources up and down using a knowledge base with various rules, and its internal architecture consists of multiple subcomponents, based on autonomic computing architecture principles, to implement the self-resource-adjustment design.
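
The rule-based scaling layer described in the abstract can be sketched as a simple decision function. The metric names, thresholds, and VM bounds below are invented for illustration and are not taken from the paper:

```python
# Hypothetical rule-based autoscaling decision, in the spirit of MIRRA's
# knowledge base. Metric names, thresholds, and bounds are placeholders.
rules = [
    # (condition over observed metrics, change in VM count)
    (lambda m: m["cpu"] > 0.8 or m["deadline_miss"] > 0.05, +1),  # scale out
    (lambda m: m["cpu"] < 0.3 and m["deadline_miss"] == 0.0, -1), # scale in
]

def decide(metrics, vms, vm_min=1, vm_max=16):
    """Return the new VM count after applying the first matching rule."""
    for cond, delta in rules:
        if cond(metrics):
            return min(vm_max, max(vm_min, vms + delta))
    return vms  # no rule fired: keep the current allocation

print(decide({"cpu": 0.9, "deadline_miss": 0.0}, 4))   # → 5 (scale out)
print(decide({"cpu": 0.2, "deadline_miss": 0.0}, 4))   # → 3 (scale in)
```

A real system would also have to rate-limit such decisions and account for VM startup delays, which is exactly where the self-adjustment layers of the middleware come in.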

  • Resource Management Challenges for the Infinite Cloud

    Authors: William Tärneberg (Lund University), Amardeep Mehta, Johan Tordsson (Umea University), Maria Kihl (Lund University), Erik Elmroth (Umea University)

    Abstract: Cloud applications are growing more and more complex because of user mobility, hardware heterogeneity, and their multi-component nature. Today's cloud infrastructure paradigm, based on distant data centers, is not able to provide consistent performance and low enough communication latency for future applications. These discrepancies can be accommodated using existing large-scale distributed cloud infrastructure, also known as the Infinite Cloud, which is an amalgam of several data centres hosted by a telecom network. The Infinite Cloud provides opportunities for applications with high capacity, high availability, and low latency requirements. The Infinite Cloud and federated cloud paradigms introduce several challenges, due to the heterogeneous nature of resources of different scale, latencies due to geographical locations, and dynamic workloads, in better accommodating distributed applications with increased diversity. Managing a vast heterogeneous infrastructure of this nature cannot be done manually. Autonomous, distributed, collaborative, and self-configuring systems need to be developed to manage the resources of the Infinite Cloud in order to meet application Service Level Agreements (SLAs) and the operators' internal management objectives. In this paper, we discuss some of the associated research challenges for such a system by formulating an optimization problem based on its constituent cost models. The decision maker takes into account the computational complexity as well as the stability of the optimal solution.

  • Model-Based Deadtime Compensation of Virtual Machine Startup Times

    Authors: Manfred Dellkrantz, Jonas Dürango, Anders Robertsson, Maria Kihl (Lund University)

    Abstract: Scaling the amount of resources allocated to an application according to the actual load is a challenging problem in cloud computing. The emergence of autoscaling techniques allows for autonomous decisions to be taken on when to acquire or release resources. The actuation of these decisions is, however, affected by time delays. Therefore, it becomes critical for the autoscaler to account for this phenomenon, in order to avoid over- or under-provisioning. This paper presents a delay compensator inspired by the Smith predictor. The compensator allows one to close a simple feedback loop around a cloud application with a large, time-varying delay, preserving the stability of the controlled system. It also makes it possible for the closed-loop system to converge to a steady state, even in the presence of resource quantization. The presented approach is compared to a threshold-based controller with a cooldown period, as typically adopted in industrial applications.
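
The Smith-predictor idea behind this paper can be illustrated with a toy simulation. The pure-integrator plant model, the fixed 5-step delay, and the proportional gain below are all invented for illustration; the actual compensator in the paper also handles time-varying delays and resource quantization:

```python
# Minimal discrete-time Smith-predictor sketch: the controller acts on a
# delay-free internal model, corrected by the mismatch between the measured
# plant output and a delayed copy of the model.
d = 5            # actuation delay in control periods (hypothetical)
Kp = 0.5         # proportional gain (hypothetical)
r = 10.0         # desired capacity, e.g. number of VMs

plant = 0.0                  # real system: VMs actually online
model_free = 0.0             # internal model without the delay
model_delayed = 0.0          # internal model with the delay
pending = [0.0] * d          # actuation pipeline (VMs still booting)

for k in range(40):
    # Smith-predictor feedback: delay-free model plus mismatch correction
    y_fb = model_free + (plant - model_delayed)
    u = Kp * (r - y_fb)
    pending.append(u)
    u_delayed = pending.pop(0)
    plant += u_delayed           # real plant sees the delayed action
    model_free += u              # delay-free model sees it immediately
    model_delayed += u_delayed   # delayed model mirrors the real plant

print(round(plant, 3))   # converges to r despite the 5-step delay
```

With a perfect model the correction term vanishes and the loop behaves as if there were no delay, which is why the closed loop stays stable at gains that would oscillate with a naive feedback on the measured output.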

  • Optimal Process Control of Symbolic Transfer Functions

    Authors: C. Griffin and E. Paulson (Penn State University)

    Abstract: Transfer function modeling is a standard technique in classical Linear Time Invariant and Statistical Process Control. The work of Box and Jenkins was seminal in developing methods for identifying parameters associated with classical (r, s, k) transfer functions. Computing systems are often fundamentally discrete, and feedback control in these situations may require discrete event systems for modeling control structures and process flow. In these situations, a discrete transfer function in the form of an accurate hidden Markov model of input/output relations can be used to derive optimally responding controllers. In this paper, we extend work begun by the authors in identifying symbolic transfer functions for discrete event dynamic systems (Griffin et al., Determining A Purely Symbolic Transfer Function from Symbol Streams: Theory and Algorithms. In Proc. 2008 American Control Conference, pp. 1166-1171, Seattle, WA, June 11-13, 2008). We assume an underlying input/output system that is purely symbolic and stochastic. We show how to use algorithms for estimating a symbolic transfer function and then use a Markov Decision Process representation to find an optimal symbolic control function for the symbolic system.
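
The final step the abstract describes, finding an optimal symbolic controller from a Markov Decision Process representation, can be sketched with value iteration on a toy MDP. The states, actions, probabilities, rewards, and costs below are invented placeholders, not taken from the paper:

```python
# Toy MDP over two system symbols with two control symbols; value iteration
# yields the optimal symbolic control function. All numbers are made up.
states  = ["ok", "degraded"]
actions = ["hold", "reset"]
P = {   # P[state][action] -> [(next_state, probability), ...]
    "ok":       {"hold":  [("ok", 0.9), ("degraded", 0.1)],
                 "reset": [("ok", 1.0)]},
    "degraded": {"hold":  [("degraded", 1.0)],
                 "reset": [("ok", 0.8), ("degraded", 0.2)]},
}
R = {"ok": 1.0, "degraded": 0.0}      # reward for occupying a state
cost = {"hold": 0.0, "reset": 0.3}    # actuation cost per control symbol
gamma = 0.9                           # discount factor

def q(s, a, V):
    """One-step lookahead value of taking action a in state s."""
    return R[s] - cost[a] + gamma * sum(p * V[s2] for s2, p in P[s][a])

V = {s: 0.0 for s in states}
for _ in range(500):                  # value iteration to a numerical fixed point
    V = {s: max(q(s, a, V) for a in actions) for s in states}

policy = {s: max(actions, key=lambda a: q(s, a, V)) for s in states}
print(policy)   # → {'ok': 'hold', 'degraded': 'reset'}
```

In the paper's setting, the transition probabilities would come from the estimated symbolic transfer function rather than being specified by hand.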

  • Self-adaptation Challenges for Cloud-based Applications: A Control Theoretic Perspective

    Authors: Soodeh Farokhi (Vienna University of Technology), Pooyan Jamshidi (Imperial College London), Erik Elmroth (Umea University)

    Abstract: Software applications accessible over the web face dynamic and unpredictable workloads. Since cloud elasticity provides the ability to adjust the deployed environment on the fly, modern applications tend to target the cloud as a fertile deployment environment. However, relying only on the native elasticity features of cloud service providers is not sufficient for modern applications. This is because current features rely on users' knowledge for configuring the performance-sensitive parameters of the elasticity mechanism, and in general users cannot optimally determine such parameters. In order to overcome this user dependency, using approaches from autonomic computing seems appropriate. Control theory proposes a systematic way to design feedback control loops and is a promising approach to handle unpredictable changes at runtime for software applications. Although there are still substantial challenges to effectively utilizing feedback control in the self-adaptation of software systems, the software engineering and control theory communities have made recent progress in consolidating their differences by identifying challenges that can be addressed cooperatively. This paper is in the same vein but aims to highlight the challenges in the self-adaptation process of cloud-based applications from the perspective of control engineers. Addressing these challenges can potentially determine important research directions for both the cloud and control engineering communities.

  • A Quantitative Evaluation of the RAPL Power Control System

    Authors: Huazhe Zhang and Henry Hoffmann (University of Chicago)

    Abstract: We evaluate Intel's RAPL power control system, which allows users to set a power limit and then tunes processor behavior to respect that limit. We evaluate RAPL by setting power limits and running a number of standard benchmarks. We quantify RAPL along five metrics: stability, accuracy, settling time, overshoot, and efficiency. The first four are standard measures for evaluating control systems. The last recognizes that any power control approach should deliver the highest possible performance achievable within the power limit. Our results show that RAPL performs well on the four standard metrics, but some benchmarks fail to achieve maximum performance. At high power limits, the average performance is within 90% of optimal. At middle power limits, it is 86% of optimal. At low power limits, the average performance is less than 65% of optimal.
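
The standard step-response metrics named in the abstract can be computed directly from a sampled power trace. In this sketch the trace, the cap, and the 2% settling band are invented; real data would come from RAPL energy counters:

```python
# Overshoot and settling time for a power-capping controller, computed
# from a (synthetic) power trace sampled at a fixed interval.
limit = 50.0                                                # power cap in watts
trace = [80, 64, 55, 48, 52, 50.5, 49.8, 50.1, 50.0, 50.0]  # watts per sample

# Settling time: first sample after which the trace stays within +/-2% of the cap
band = 0.02 * limit
settling_index = next(i for i in range(len(trace))
                      if all(abs(x - limit) <= band for x in trace[i:]))

# Overshoot: how far the controller dips below the cap while stepping down
overshoot = max(0.0, (limit - min(trace)) / limit)

print(settling_index, overshoot)   # → 5 0.04
```

Accuracy and stability can be read off the same trace as the mean and variance of the settled samples; efficiency, as the paper notes, additionally requires knowing the best achievable performance under the cap.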



Program (Monday, April 13, 2015)

8:45-10:00

  • Introduction
  • Keynote (9:00-10:00): George Kesidis, Demand Response to Prices in Access Networks

10:30-12:00

  • Huazhe Zhang and Henry Hoffmann. A Quantitative Evaluation of the RAPL Power Control System
  • Christopher Griffin and Elisabeth Paulson. Optimal Process Control of Symbolic Transfer Functions
  • Soodeh Farokhi, Pooyan Jamshidi, Ivona Brandic and Erik Elmroth. Self-adaptation Challenges for Cloud-based Applications: A Control Theoretic Perspective

13:00-14:30

  • Keynote (13:00-14:00): Simon Tuffs, Feedback Computing: Challenges and Opportunities in Cloud Architectures
  • Manfred Dellkrantz, Jonas Dürango, Anders Robertsson and Maria Kihl. Model-Based Deadtime Compensation of Virtual Machine Startup Times

15:00-16:30

  • Alessandro Vittorio Papadopoulos. Design and Performance Guarantees in Cloud Computing: Challenges and Opportunities
  • Yong Woon Ahn and Albert Cheng. MIRRA: Rule-Based Resource Management for Heterogeneous Real-Time Applications Running in Cloud Computing Infrastructures
  • William Tärneberg, Amardeep Mehta, Johan Tordsson, Maria Kihl and Erik Elmroth. Resource Management Challenges for the Infinite Cloud


Keynotes (Monday, April 13, 2015)

Demand Response to Prices in Access Networks

In the past ten years, many networking researchers have studied economic issues associated with public commodity Internet access. Evolving network neutrality regulations apparently aim to promote competition and reduce prices. To understand such economic issues, elementary models of consumer sensitivities to prices and congestion are often employed (i.e., consumer feedback). Moreover, the network-marketplace participants themselves must continually assess the impact of prices on demand and competition for the services/content they offer. In this talk, we describe two simple examples. One is used to illustrate an issue related to network neutrality: competition between an "eyeball" ISP's managed services and a content provider remote to the ISP's subscribers. The other involves a cellular wireless network with competing access providers, including a smaller entrant with limited coverage whose subscribers significantly roam. In both cases, instead of very complex and debatable numerical studies, such simple tractable models can be used to illustrate the trade-offs in the decisions facing regulators.
Bio: Prof. George Kesidis received his M.S. and Ph.D. in EECS from U.C. Berkeley in 1990 and 1992, respectively. He was a professor in the E&CE Dept. of the University of Waterloo, Canada, from 1992 to 2000. Since 2000, he has been a professor of CSE and EE at the Pennsylvania State University. His research interests include several areas of computer/communication networking and security, performance evaluation, energy-efficient operations, optimization, and machine learning. His research has been primarily supported by NSERC of Canada grants, NSF grants, and Cisco Systems URP gifts. He has written three short books and several articles. He served as TPC co-chair of IEEE INFOCOM 2007 and of other networking and security conferences, and on the editorial boards of the Computer Networks Journal, ACM TOMACS, and the IEEE Journal on Communications Surveys and Tutorials. In 2012 and 2013 he served as an "intermittent expert" for the National Science Foundation's Secure and Trustworthy Cyberspace (SaTC) program. His home page is http://www.cse.psu.edu/~kesidis.

Feedback Computing: Challenges and Opportunities in Cloud Architectures

As computing has moved to the cloud, the tradeoff between performance, availability, and cost has become increasingly complicated. Distributed, cloud-based architectures are complex, nonlinear, stochastic, and time-varying, and are often inherently unstable under heavy load. Companies that move to the cloud need to maximize performance subject to constraints on availability and cost. This talk surveys some of the challenges involved in designing and operating micro-service cloud systems, and explores constraints on their stability and efficiency. These constraints are illustrated with examples drawn from Netflix and Life360.
Bio: Dr. Simon Tuffs began his career in computing by researching self-tuning control systems at the University of Oxford. He then moved into the software engineering industry, where he has held a wide range of positions developing and deploying commercial systems. In his current role he applies analytics and feedback to deliver high-performance location services in the cloud.


Registration

We are co-located with CPSWeek this year. Please see detailed registration information on the CPSWeek page.



Previous events