CITI Seminar of Christine Solnon (INSA-Lyon-LIRIS) on September 26 at 2pm

Title: Time-Dependent and Stochastic Vehicle Routing Problems

Date and Place: 26 / 09 / 2019 14:00 in TD-C

Host: Florent de Dinechin

Abstract:
Smart cities are equipped with sensors that monitor traffic speed. Exploiting these data to optimise urban deliveries has given rise to challenging new problems, and I’ll focus on two of them:
– Time-Dependent Vehicle Routing Problems, which take into account variations of travel speeds during the day;
– Stochastic Vehicle Routing Problems, where uncertain data are represented by random variables.

Biography:
Christine Solnon is Professor in the Computer Science Department of INSA Lyon, and member of the LIRIS lab.


PhD Defence: “Contributions Théoriques sur les Communications Furtives”, David KIBLOFF, Chappe Amphitheater, CITI, 17th of September 2019 at 14h00

Title

Information Theoretic Contributions to Covert Communications

Abstract

The problem of covert communications, also known as communication with a low probability of detection, has gained interest in the information theory community in recent years. Since Bash et al. showed in 2012 that the square-root law applies to such communication systems in the point-to-point case, contributions on the topic have grown steadily. In this thesis, two new problems of covert communications are introduced. First, we study the problem of covert communications over a point-to-point link where a warden observes only a fraction of the channel outputs in order to try to detect the communication. An achievability bound in the finite block-length regime is derived for this problem. Second, the problem of embedding covert information into a given broadcast code is introduced. Given a broadcast code that transmits a common message to two receivers, the goal is to determine the maximum number of information bits that can be reliably sent to one receiver while remaining covert with respect to the other. For this problem, both achievability and converse bounds in the asymptotic block-length regime are derived for a particular class of channels, namely symmetric channels. Together, these bounds characterize the maximum number of information bits that can be covertly embedded in a given broadcast code for symmetric channels.
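For context, the square-root law cited above admits the following standard statement (a textbook formulation from the covert-communications literature, not a quotation from the thesis):

```latex
% Square-root law (Bash et al., 2012), stated informally: over n channel
% uses, a covert transmitter can reliably send on the order of \sqrt{n} bits.
M^{*}(n) = \Theta\!\left(\sqrt{n}\right)
\qquad\Longrightarrow\qquad
\lim_{n \to \infty} \frac{M^{*}(n)}{n} = 0,
```

where M*(n) denotes the maximum number of bits that can be transmitted both reliably and covertly in n channel uses; the covert rate per channel use therefore vanishes asymptotically.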

 

Jury

  • Dr. Albert Guillen i Fabregas, Université Pompeu Fabra, Spain. Reviewer.
  • Dr. Aline Roumy, INRIA, France. Reviewer.
  • Dr. Laurent Clavier, IMT Lille Douai, France. Examiner.
  • Dr. Inbar Fijalkow, Université de Cergy-Pontoise, France. Examiner.
  • Dr. Jean-Marie Gorce, INSA de Lyon, France. Examiner.
  • Dr. Ligong Wang, CNRS, France. Examiner.
  • Dr. Guillaume Villemaud, INSA de Lyon, France. Thesis supervisor.
  • Dr. Samir M. Perlaza, INRIA, France. Thesis co-supervisor.
  • Dr. Ronan Cosquer, DGA, France. Invited member.

Save the date: “Algorithmes en boite-noire” (black-box algorithms) workshop – October 10, Lyon

Workshop “algorithmes en boite-noire” (black-box algorithms)
Taking stock of, and responding to, massive exposure to algorithmic decisions.
October 10, 2019, in Lyon

http://atelier-blackbox.conf.citi-lab.fr

Information and communication technologies are transforming society. Algorithms
are the focus of everyone’s attention: fundamental industrial assets for
companies, they are often perceived as “black boxes” by the users confronted
with their decisions. This situation is only amplified by the continuing
deployment of machine-learning solutions, whose latest technical advances
(deep neural networks) produce decisions that are unexplainable by
construction.

The aim of this workshop is to question these black boxes: their nature,
their design, and more generally their impact on their users and on society.
The workshop’s multidisciplinary, cross-cutting perspective will seek to
address topics such as:

– The processing biases of algorithms, and the means to quantify them;
– Whether the notion of the common good can be implemented;
– Which tools and which metrics for a citizen audit of algorithms;
– What responsibility for the designers of opaque algorithms;
– The means for users to observe and grasp the results of black-box
algorithms in order to build an understanding of them;
– How society is evolving in the face of a growing number of black boxes
making automated decisions, and the acknowledged lack of explanation of those
decisions;
– Which notions of ethics in the current development of these black boxes,
and whether they are sufficient and satisfactory;
– The processing of users’ personal data, and the notions of consent and
understanding: how can we exercise critical control over the processing of
our critical data?

Speakers (confirmed):
* Dominique CARDON (sociologist, director of the Sciences Po Médialab)
* Loup CELLARD (designer, PhD candidate at the Centre for Interdisciplinary
Methodologies, University of Warwick, UK)
* Claude KIRCHNER (emeritus research director at Inria, member of several
ethics-related committees)
* Claire MATHIEU (CNRS research director, specialist in algorithms, on
assignment for Parcoursup)
* Antoinette ROUVROY (lawyer and philosopher of law, professor at the
University of Namur and FNRS researcher, Belgium)
* Félix TREGUER (member of La Quadrature du Net, researcher at the CNRS
Centre Internet et Société and at the Sciences Po CERI)

The current programme is available here:
http://atelier-blackbox.conf.citi-lab.fr

Venue: Les Halles du Faubourg – 10 Impasse des Chalets, 69007 Lyon

Registration (free): http://atelier-blackbox.conf.citi-lab.fr/inscription/


CITI Seminar of Julie Dumas (Université Grenoble Alpes) on May 21 at 11am

Title: Cache coherence in manycore architectures, and their simulation

Date and Place: 21 / 05 / 2019 11:00 in TD-C

Host: Guillaume Salagnac

Abstract:

Ever-increasing computing needs, together with the demand for energy efficiency, have driven the development of manycore architectures, which are the subject of abundant research, in particular around the memory model. If we consider shared-memory machines, the scalability of cache coherence protocols is still an open problem. Indeed, snooping-based protocols, which must transmit coherence information to every cache, generate a large number of messages, few of which are actually useful. Directory-based protocols, by contrast, aim to send messages only to the caches that need them. In that case, as the number of cores increases, the directory grows both in width and in depth, and its size can even exceed that of the data held in the caches. To scale, a protocol must issue a reasonable number of coherence messages and limit the hardware devoted to coherence, in particular to storing the directory. In this talk, we will present DCC (Dynamic Coherent Cluster), a dynamic representation of the sharer list for cache coherence. We will then turn to the simulation of these architectures, which is holding back their development: the more accurate a simulator is, the longer its simulations take. One reason is that the vast majority of simulators do not run in parallel, so a simulation of N cores runs on a single physical core. To evaluate an architecture quickly, we will propose a high-level-of-abstraction cache model into which traces from an accurate simulator (gem5) are injected. DCC and other representations of the sharer list will be evaluated with this methodology.
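As a rough illustration of the directory growth described in the abstract: a full-map (bit-vector) directory keeps one presence bit per core for every cache line, so its relative overhead grows linearly with the core count. This is my own back-of-the-envelope sketch, not material from the talk; DCC itself uses a more compact representation.

```python
# Full-map directory: one sharer bit per core for each tracked cache line.
# Hypothetical illustration of why directories stop scaling with core count.
def fullmap_directory_overhead(n_cores: int, line_bytes: int = 64) -> float:
    """Directory entry size as a fraction of the cache line it tracks."""
    entry_bits = n_cores        # one presence bit per core
    line_bits = line_bytes * 8  # data bits per cache line
    return entry_bits / line_bits

# With 64-byte lines, the directory matches the data size at 512 cores
# and exceeds it beyond that point.
for n in (16, 64, 512, 1024):
    print(f"{n:5d} cores -> overhead {fullmap_directory_overhead(n):.3f}")
```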

Biography:
Julie Dumas obtained her PhD in computer science from Université Grenoble Alpes in 2017. Her research focuses on computer architecture, and in particular on memory management in manycore architectures and on techniques for simulating them.


CITI Seminar of Thomas Begin (LIP, UCBL Lyon 1) on April 2 at 11am

Title: Contributions to the Performance Modeling of Computer Networks
Date and Place: 02 / 04 / 2019 11:00 in TD-C
Host: Jean-Marie Gorce and Florent de Dinechin
Abstract:
In this talk, I will present some of my contributions to the fields of performance evaluation and computer networks. I will first discuss a new modeling framework to evaluate the performance of DPDK-based virtual switches in the context of NFV (Network Function Virtualization) networks. Then, I’ll describe a scalable stochastic model to accurately forecast the performance of an IEEE 802.11-based network. Finally, I will introduce an original reduced-state description for multiserver queues that breaks the combinatorial complexity inherent to the classical state description and that can easily handle examples with hundreds of servers.
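To give a sense of the combinatorial complexity mentioned above, consider a queue with c identical servers whose service times follow a phase-type distribution with p phases: a classical state description tracks how many busy servers are in each phase, which alone yields a multiset coefficient’s worth of configurations. This is a textbook-style count I am using purely for illustration; the exact state description considered in the talk may differ.

```python
# Number of ways to distribute c busy (indistinguishable) servers over
# p service phases: the multiset coefficient C(c + p - 1, p - 1).
from math import comb

def classical_state_count(c: int, p: int) -> int:
    """Configurations of c indistinguishable servers across p phases."""
    return comb(c + p - 1, p - 1)

print(classical_state_count(2, 2))    # 3 configurations
print(classical_state_count(100, 5))  # millions of states for 100 servers
```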

 

Biography:
Thomas Begin received his Ph.D. degree in Computer Science from UPMC (U. Paris 6) in 2008. He was a post-doctoral fellow at UC Santa Cruz in 2009. Since 2009, he has been an Associate Professor in the Computer Science department at UCBL (U. Lyon 1). During the 2015-2016 academic year, he was on research leave at the DIVA lab, University of Ottawa. His research interests are in performance evaluation, future network architectures, and system modeling. His principal applications pertain to high-level modeling, wireless networks, resource allocation, and queueing systems.

CITI Seminar of Eddy Caron (LIP, École Normale Supérieure de Lyon) on March 19 at 11am

Title: Once upon a time … the deployment
Date and Place: 19/03/ 2019 11:00 in TD-C
Host: Jean-Marie Gorce and Florent de Dinechin
Abstract:
In large distributed systems, resource management is one key to efficiency, and the deployment of elements onto resources is hidden everywhere: across the network, across virtualization layers, across many infrastructures, and so on. Through six stories, we will discover many points of view on deployment. In the first adventure, we will see how to deploy a middleware with self-stabilization capabilities. In the second story, be afraid: we will see how to deploy a secure cloud infrastructure. In the next story, we will introduce a deployment tool for reproducibility. License deployment is another strange story, full of mysteries. Then comes an unbelievable story about deploying a data-driven microservices infrastructure. And finally, we will try to clear up fog deployment.

 

Biography:
Eddy Caron is an Associate Professor at École Normale Supérieure de Lyon and holds a position with the LIP laboratory (ENS Lyon, France). He is a member of the AVALON project-team from INRIA and Technical Manager of the DIET software package. He received his PhD in computer science from the Université de Picardie Jules Verne in 2000 and his HDR (Habilitation à Diriger les Recherches) from the École Normale Supérieure de Lyon in 2010. His research focuses on distributed computing environments, from P2P to grid, cloud and edge computing. At the middleware level, he deals with a wide range of subjects (scheduling, workflow management, data management, energy management, security, software management, etc.), always from the point of view of resource management in heterogeneous environments.
He is involved in many program committees (such as HCW, IPDPS, ISPA, and CloudTech). Since 2000, he has contributed more than 30 journal articles or book chapters and more than 80 publications in international conferences. He was co-chair of the GridRPC working group in the OGF, and coordinator of two French ANR projects (LEGO and SPADES). He was a work-package leader in the European project Seed4C on security. He has supervised 15 PhD students (4 in progress). He teaches distributed systems; architecture, operating systems and networks; grid and cloud computing; etc. He was also co-founder and scientific consultant of a company (SysFera), and is Deputy Director in charge of calls for projects, research transfer and international affairs for the LIP laboratory. See http://graal.ens-lyon.fr/~ecaron for further information.

CITI Seminar of Alain Tchana (I3S, Université de Nice Sophia-Antipolis) on February 22 at 11AM

Title: Blablabla Virtualisation
Date and Place: 22/02/2019 at 11:00 in TD-C
Host: Jean-Marie Gorce and Florent de Dinechin
Abstract:
This talk will focus on the field of virtualized infrastructures. In this domain, I aim to minimize electricity consumption while improving application performance. To achieve the first goal, I work both at the level of the entire datacenter (by providing better VM placement strategies) and at the level of the physical machine (by providing better power-management policies). Concerning the second goal, I work both at the VM monitor level (to minimize its overhead) and at the level of the VM’s operating system (to make it aware of the fact that it is virtualized).

 

Biography:
Alain Tchana received his Ph.D. in computer science in 2011 from the Institut National Polytechnique de Toulouse. The research topic of his Ph.D. was autonomic computing applied to cloud environments. He then spent two years as a postdoc at Université Joseph Fourier, where he worked on building benchmarking systems. From September 2013 to September 2018, he was an Associate Professor at the Institut National Polytechnique de Toulouse and a member of the SEPIA research group at the IRIT laboratory. His main research domain is virtualization. Since September 2018, he has been a full professor at Université de Nice Sophia-Antipolis, where he is a member of the Scale research group at I3S and continues to work in the virtualization domain.

CITI Talk: “Wired team presentation and discussions about blockchain”, Stéphane Frenot (INSA-Lyon, CITI) on February 15th at 11am

Title

Wired team presentation and discussions about blockchain

 

Summary

Blockchains are technologies for storing and transmitting information. They make it possible to build replicated, distributed ledgers with no central controlling authority, secured by cryptography and structured as blocks linked to one another at regular time intervals. They are used by a certain number of actors and fuel a great many debates, from the coffee corner to the World Trade Organization.

As a member of the CITI research laboratory, I feel concerned by these technologies and wonder what we should do with them.
In this seminar, I propose to present my understanding of blockchain systems and to share my point of view as a designer of distributed, peer-to-peer applications for the Web.
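The “blocks linked to one another” structure mentioned in the summary can be sketched in a few lines: each block stores the hash of its predecessor, so altering any block invalidates every later link. This is my own minimal illustration, not code from the talk.

```python
# Minimal hash-chain sketch: each block commits to its predecessor's hash.
import hashlib
import json

def make_block(data: str, prev_hash: str) -> dict:
    """Build a block whose hash covers its data and its predecessor's hash."""
    block = {"data": data, "prev": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block("genesis", "0" * 64)
second = make_block("tx: A pays B", genesis["hash"])

# The second block's "prev" field pins down the exact content of the first:
# tampering with the genesis data yields a different hash, breaking the link.
tampered = make_block("genesis (edited)", "0" * 64)
print(second["prev"] == genesis["hash"])   # the honest chain links up
print(tampered["hash"] == genesis["hash"]) # the tampered block does not
```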

Bio

Stéphane Frénot took part in the creation of the CITI in 2001. He specialises in software engineering and distributed applications. He led the laboratory’s middleware theme and the INRIA Amazones team, then took part in the INRIA exploratory project Dice on intermediation platforms. For the past year he has been director of the Telecommunications, Services and Usages department of INSA, where he teaches software engineering, distributed systems and innovation.

He has worked on software component architectures, peer-to-peer systems for component deployment, and a stream-oriented programming model for JavaScript. He has contributed to three patent filings: in the domestic IoT, in JavaScript streams, and in a voting protocol. Finally, he is in charge of developing Jumplyn, a student-project management platform currently being tested at INSA for managing internships.


CITI Talk: “Wireless Networks Design in the Era of Deep Learning: Model-Based, AI-Based, or Both?”, Marco Di RENZO (CR CNRS, L2S) on February 13th at 11am

Title

Wireless Networks Design in the Era of Deep Learning: Model-Based, AI-Based, or Both?

 

Summary

This work addresses the use of emerging data-driven techniques based on deep learning and artificial neural networks in future wireless communication networks. In particular, a key point made and supported throughout the work is that data-driven approaches should not replace traditional design techniques based on mathematical models. On the contrary, despite being seemingly mutually exclusive, there is much to be gained by merging data-driven and model-based approaches. To begin with, a detailed presentation is given of the reasons why deep learning based on artificial neural networks will be an indispensable tool for the design and operation of future wireless communication networks, together with a description of the recent technological advances that make deep learning practically viable for wireless applications. Our vision of how artificial neural networks should be integrated into the architecture of future wireless communication networks is presented, explaining the main areas where deep learning provides a decisive advantage over traditional approaches.

Afterwards, a thorough description of deep learning methodologies is provided, starting with the general machine learning paradigm and followed by a more in-depth discussion of deep learning. Artificial neural networks are introduced as the distinctive feature that makes deep learning different from, and better-performing than, other machine learning techniques. The most widely used artificial neural network architectures and their training methods are analyzed in detail. Moreover, bridges are drawn between deep learning and other major learning frameworks such as reinforcement learning and transfer learning.

After introducing the deep learning framework, its application to wireless communication is addressed. This part of the work first provides the state of the art of deep learning for wireless communication networks, and then moves on to several novel case studies in which the use of deep learning proves extremely useful for network design. In particular, the connection between deep learning and model-based approaches is emphasized, and several novel techniques for cross-fertilization between these two paradigms are proposed. For each case study, it is shown how the use of (even approximate) mathematical models can significantly reduce the amount of live data that needs to be acquired or measured to implement data-driven approaches. For each application, the merits of the proposed approaches are demonstrated by a numerical analysis in which the implementation and training of the artificial neural network used to solve the problem are discussed. Finally, concluding remarks describe what, in our opinion, are the major directions for future research in this field.
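One way to picture the model-based/data-driven combination advocated above: an approximate mathematical model (here the Shannon rate formula, chosen purely for illustration) can generate cheap synthetic training samples for a simple learner, reducing the need for live measurements. This sketch is my own and is not taken from the talk; a cubic polynomial stands in for the neural network.

```python
# Train a tiny "learner" (a cubic polynomial) purely on samples produced
# by an approximate channel model, instead of on live measurements.
import numpy as np

snr_db = np.linspace(0, 30, 301)          # synthetic operating points
rate = np.log2(1 + 10 ** (snr_db / 10))   # Shannon model as data generator

coeffs = np.polyfit(snr_db, rate, deg=3)  # surrogate fitted on model data
pred = np.polyval(coeffs, 15.0)           # query the surrogate at 15 dB
true = np.log2(1 + 10 ** 1.5)             # what the model itself predicts
print(f"surrogate error at 15 dB: {abs(pred - true):.4f} bit/s/Hz")
```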