Transactional memory is a prominent synchronization mechanism for shared-memory multiprocessors. Access to critical sections of programs is accomplished with atomic transactions of memory accesses. Transactions are inherently deadlock-free and thus alleviate a major drawback of classic lock-based synchronization. Transactional memory has been implemented in modern CPUs and programming languages. In this talk, we consider the distributed shared memory setting, where processor nodes are connected through a communication network. We focus on the data flow model, in which transactions access shared memory objects that move from node to node along paths in the network. We study transaction execution scheduling problems, which determine the movements of the shared objects so that transactions execute once their requested objects have been fetched.
We present scheduling algorithms with efficient execution time and communication cost. First, we observe that there are hard problem instances in which execution time and communication cost cannot be minimized simultaneously. We then provide efficient schedules for the execution time in specialized graphs that arise in practical scenarios: the clique, line, grid, cluster, hypercube, butterfly, and star. In most of these cases, when individual transactions request k objects, we obtain solutions that approximate the optimal execution time within an O(k) factor, yielding near-optimal solutions for constant k. The communication cost is also low, since the objects follow near-optimal paths in the network. We discuss how to adapt the offline performance analysis to the online (dynamic) setting, where transactions are generated continuously over time.
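To make the data-flow model concrete, here is a minimal toy simulation on a line network (all names and the sequential-execution policy are hypothetical illustrations, not the speaker's algorithms): each shared object sits at a node, a transaction fires once every object it requests has travelled to its node, and the objects then remain there for later transactions. Moving an object one hop costs one message, and a transaction's waiting time is the longest distance any of its objects must travel.

```python
def run_schedule(order, txn_node, obj_node, requests):
    """Execute transactions sequentially in `order` on a line network.

    txn_node:  transaction -> node (an integer position on the line)
    obj_node:  object -> initial node
    requests:  transaction -> iterable of requested objects
    Returns (execution_time, communication_cost).
    """
    location = dict(obj_node)
    time = comm = 0
    for t in order:
        node = txn_node[t]
        dists = [abs(location[o] - node) for o in requests[t]]
        time += max(dists, default=0)   # objects travel in parallel
        comm += sum(dists)              # each hop costs one message
        for o in requests[t]:
            location[o] = node          # objects stay where t executed
    return time, comm

# Two transactions on a 5-node line: T1 at node 0, T2 at node 4,
# both requesting objects a (initially at node 2) and b (at node 4).
t, c = run_schedule(
    ["T1", "T2"],
    {"T1": 0, "T2": 4},
    {"a": 2, "b": 4},
    {"T1": ["a", "b"], "T2": ["a", "b"]},
)
```

Running T1 first gives execution time 8 and communication cost 14, while reversing the order gives 6 and 10: even in this tiny instance the schedule determines both metrics, which is the tension the abstract refers to.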
Costas Busch obtained a B.Sc. degree in 1992 and an M.Sc. degree in 1995, both in computer science, from the University of Crete, Greece. He received a Ph.D. degree in computer science from Brown University in 2000. He is currently a professor at the School of Computer and Cyber Sciences at Augusta University. His research interests are in the areas of distributed algorithms and data structures, design and analysis of communication algorithms, algorithmic game theory, and blockchains. He has published in several prominent venues in computer science. His research has been supported by the National Science Foundation.
Large Scale Real-time Distributed Systems
Resource Allocation and Scheduling Issues
Department of Informatics
Aristotle University of Thessaloniki, Greece
Due to advances in networks and computing systems, many aspects of our daily life depend on distributed, interconnected computing resources. Large-scale distributed systems offer computational services to scientists, consumers, and enterprises. Efficient management of distributed resources is crucial to effectively harness the power of these systems and achieve good performance. Large-scale distributed systems are usually real-time, as they serve applications that require real-time processing, so it is essential to employ resource allocation and scheduling techniques that ensure timeliness. Cloud computing, a large-scale distributed computing paradigm based on a pay-as-you-go pricing model, has been extensively used for the deployment of complex, computationally intensive applications. Running delay-sensitive applications is particularly important in cloud computing and is made possible by the cloud's high-performance computing capabilities for real-time execution. However, approaches that address further issues, such as cost and energy conservation, are also necessary.

In recent years the Internet of Things (IoT) has expanded rapidly. A plethora of IoT applications generate huge amounts of data, and it is important to process these data in real time and provide fast decisions. As a result, fog computing has emerged as a computing model that extends the cloud to the edge of the network, thus reducing the latency of IoT data transmission. Since the computational capacity of fog resources is usually limited, it is necessary to employ algorithms that involve collaboration between cloud and fog servers. Consequently, appropriate scheduling of time-sensitive applications is required to exploit the capacity of cloud and fog computing so that deadlines are met.
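The cloud-fog collaboration described above can be sketched as a toy deadline-aware placement heuristic (an illustrative assumption of the talk's setting, not a technique from it): fog nodes offer low network latency but limited capacity, while the cloud adds transmission delay but has ample compute, so each task is placed on the fog if it fits and meets its deadline there, and otherwise offloaded to the cloud.

```python
def place_tasks(tasks, fog_capacity, fog_latency=1, cloud_latency=10):
    """Place tasks on fog or cloud, earliest deadline first.

    tasks: list of (name, compute_time, deadline) tuples.
    Returns a dict name -> 'fog' | 'cloud' | 'rejected'.
    """
    placement = {}
    fog_load = 0
    for name, work, deadline in sorted(tasks, key=lambda t: t[2]):
        fog_finish = fog_latency + fog_load + work   # queue behind fog load
        cloud_finish = cloud_latency + work          # cloud assumed uncongested
        if fog_finish <= deadline and fog_load + work <= fog_capacity:
            placement[name] = "fog"
            fog_load += work
        elif cloud_finish <= deadline:
            placement[name] = "cloud"
        else:
            placement[name] = "rejected"
    return placement

# A latency-critical alarm and sensor fusion stay at the edge; the
# heavy analytics job exceeds fog capacity and goes to the cloud.
plan = place_tasks(
    [("sensor-fusion", 2, 5), ("batch-analytics", 8, 30), ("alarm", 1, 3)],
    fog_capacity=4,
)
```

Real schedulers also weigh the cost and energy factors mentioned above; this sketch only captures the latency/capacity trade-off between the two tiers.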
In this keynote we will present and discuss various aspects of large-scale real-time distributed systems from the perspective of resource allocation and scheduling, and we will conclude with future directions in this research area.
Helen Karatza is a Professor Emeritus in the Department of Informatics at the Aristotle University of Thessaloniki, Greece. Dr. Karatza's research interests include cloud and fog computing, energy efficiency, resource allocation and scheduling, and real-time distributed systems. She has authored or co-authored over 230 technical papers and book chapters, including five papers that earned best paper awards at international conferences. She is a senior member of IEEE, ACM, and SCS, and served as an elected member of the Board of Directors at Large of the Society for Modeling and Simulation International. She has served as chair and keynote speaker at international conferences. Dr. Karatza is the Editor-in-Chief of the Elsevier journal "Simulation Modelling Practice and Theory". She was Editor-in-Chief of "Simulation: Transactions of The Society for Modeling and Simulation International", Associate Editor of "ACM Transactions on Modeling and Computer Simulation", and Senior Associate Editor of Elsevier's "Journal of Systems and Software". She has served as guest editor of special issues in international journals. More information about her activities and publications can be found at http://agent.csd.auth.gr/~karatza/
Rapid developments in information technology, with new disciplines constantly emerging and being applied in all segments of life and society, continually call for foundations built on solid theoretical concepts. In early work on the foundations of mathematics, Georg Cantor's naïve set theory harbored paradoxes, famously singled out by Bertrand Russell: does the set of all sets that do not contain themselves contain itself, or not? At the beginning of the twentieth century, type theory offered a way out of these paradoxes by preventing "a set that does not contain itself" from being a valid predicate, since sets and their elements belong to different types. In this way, type theory provided trustworthiness to logic and the foundations of mathematics.
The Curry-Howard correspondence is a deep result connecting logic and computation, in which mathematical proofs coincide with computer programs and formulae with types. In this respect, types have gained an important role in the analysis of formal systems. A type system splits the elements (terms, programs) of a language into sets, called types, and proves the absence of certain undesired behaviors. In programming languages, types are a well-established technique for ensuring program correctness. Accordingly, types have provided trustworthiness to the foundations of programming languages. Nowadays, there is a plethora of type systems in logic, programming languages, and distributed and large-scale systems, each establishing trustworthiness of its particular framework.
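A miniature illustration of the proofs-as-programs reading (a sketch for intuition only, in Python's type-hint notation rather than a real proof assistant): a total function of type A -> B plays the role of a proof of the implication A ⇒ B, so function application is modus ponens and function composition proves the hypothetical syllogism "from A ⇒ B and B ⇒ C, conclude A ⇒ C".

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def modus_ponens(proof_ab: Callable[[A], B], a: A) -> B:
    """From a proof of A => B and evidence of A, obtain evidence of B."""
    return proof_ab(a)

def compose(proof_ab: Callable[[A], B], proof_bc: Callable[[B], C]) -> Callable[[A], C]:
    """From proofs of A => B and B => C, build a proof of A => C."""
    return lambda a: proof_bc(proof_ab(a))

# Example: int => int composed with int => str yields int => str.
inc_then_show = compose(lambda n: n + 1, str)
```

A static checker such as mypy enforces the typing discipline here; in a language with full dependent types the same correspondence scales from propositional logic to machine-checked mathematics.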
In this talk, we present some significant type systems – functional types, behavioural types, linked data types – along with their properties, such as type safety, liveness, and deadlock freedom, and discuss their role in ensuring trustworthiness.
Silvia Ghilezan is a Professor of Mathematics at the University of Novi Sad and the Mathematical Institute SANU. She has held visiting positions at the University of Oregon, École Normale Supérieure de Lyon, Université Paris 7, the University of Turin, Radboud University, and McGill University. The major lines of her research are in mathematical logic with applications to programming languages, concurrency, and mathematical linguistics. Her current research interests include formal methods for new challenges in privacy protection and artificial intelligence. She has initiated, assembled consortia for, and managed several successfully completed projects under national and international programs (H2020, FP, COST, Erasmus+, Tempus, bilateral). Her research articles, written with over sixty co-authors, are published in leading scientific journals and conferences. Dr. Ghilezan serves as chair, PC member, and invited speaker at prestigious international conferences worldwide. She has supervised and influenced many students and researchers. She has been knighted as a Chevalier de l'Ordre des Palmes Académiques of the French Republic.
I used to be enthusiastic about software. I liked clever, modular architectures; design patterns made for extensibility; virtualized, multi-application runtimes; seamless software updates; reusability in harsh hardware-accelerated environments. And then I was summoned to the automotive arena, which was in desperate need of centralized processing, artificial intelligence algorithms, service-oriented architectures, and a fat software stack for next-generation vehicles. It seemed like a dream job… at first. Only until I realized that to keep a promise I now needed to adhere to the harsh world of reliability, safety, and processes. This is the story of my software stack and me travelling, unscathed, along the challenging functional safety and reliability trail.
Prof. Milan Bjelica is a seasoned R&D lead and software architect who has participated in complex consumer electronics and automotive projects involving intricate integration and multi-layered software middleware, resulting in dozens of commercial end-user products. He is also an assistant professor at the Computer Engineering Department, University of Novi Sad, and a brand-new Functional Safety Expert in automotive engineering.