PhD Defense: “High-performance Coarse Operators for FPGA-based Computing”, by Matei Istoan, on 6th April


The defense will take place on Thursday 6th April at 14:00 in the Chappe Amphitheater, Claude Chappe building, INSA Lyon.
The presentation will be held in English with slides in English.

Jury

Reviewers

Paolo IENNE, EPFL Lausanne
Roselyne CHOTIN-AVOT, UPMC Paris

Examiners

David THOMAS, Imperial College London
Frédéric PETROT, ENSIMAG, Saint Martin d’Hères
Olivier SENTIEYS, ENSSAT, Lannion

Advisors

Florent DE DINECHIN, INSA Lyon

Abstract

Field-Programmable Gate Arrays (FPGAs) have been shown to sometimes outperform mainstream microprocessors.
The circuit paradigm enables efficient application-specific parallel computations.
FPGAs also enable arithmetic efficiency: a bit is only computed if it is useful to the final result.
To achieve this, FPGA arithmetic should not be limited to the basic arithmetic operations offered by microprocessors.
This thesis studies the implementation of coarser operations on FPGAs, in three main directions.
New FPGA-specific approaches for evaluating the sine, cosine and arctangent functions have been developed.
Each function is tuned for its context and is as versatile and flexible as possible.
Arithmetic efficiency requires error analysis and parameter tuning, and a fine understanding of the algorithms used.
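To give a feel for the kind of error analysis involved, here is a small Python sketch (purely illustrative: the coefficients, word sizes and evaluation scheme are hypothetical choices, not the architectures developed in the thesis) that measures the worst-case error of a fixed-point polynomial approximation of sin(x) on [0, pi/4]:

    # Illustrative only: worst-case error of a fixed-point degree-3 polynomial
    # approximation of sin(x) on [0, pi/4]. Word sizes and coefficients are
    # hypothetical, not those produced by the thesis's operator generators.
    import math

    FRAC_BITS = 16                       # fractional bits of the fixed-point format
    SCALE = 1 << FRAC_BITS

    def to_fix(x):
        return round(x * SCALE)

    def fix_mul(a, b):
        return (a * b) >> FRAC_BITS      # fixed-point product, low bits discarded

    C1 = to_fix(1.0)                     # sin(x) ~ x - x^3/6 (degree-3 Taylor)
    C3 = to_fix(-1.0 / 6.0)

    def sin_fix(x_fix):
        x2 = fix_mul(x_fix, x_fix)
        x3 = fix_mul(x2, x_fix)
        return fix_mul(C1, x_fix) + fix_mul(C3, x3)

    max_err = 0.0
    for i in range(1000):
        x = (math.pi / 4) * i / 999
        approx = sin_fix(to_fix(x)) / SCALE
        max_err = max(max_err, abs(approx - math.sin(x)))
    print(f"worst-case error: {max_err:.2e}")

Such a measurement is what drives the choice of polynomial degree and of each intermediate word size when tuning an operator for a given accuracy.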

Digital filters are an important family of coarse operators resembling elementary functions: they can be specified at a high level as a transfer function with constraints on the signal/noise ratio, and then be implemented as an arithmetic datapath based on additions and multiplications.
The main result is a method which transforms a high-level specification into a filter in an automated way.
The first step is building an efficient method for computing sums of products by constants.
Based on this, FIR and IIR filter generators are constructed.
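As a flavour of the underlying building block (a toy sketch, not the sum-of-products-by-constants method or the generators of the thesis), a direct-form FIR filter is simply a sum of products of delayed samples by constant coefficients, each of which a hardware generator can specialise into shifts and additions:

    # Toy direct-form FIR filter: y[n] = sum_k h[k] * x[n-k].
    # The coefficients are arbitrary placeholders, not taken from the thesis.
    def fir(samples, coeffs):
        delay_line = [0.0] * len(coeffs)
        output = []
        for x in samples:
            delay_line = [x] + delay_line[:-1]            # shift in the new sample
            output.append(sum(h * s for h, s in zip(coeffs, delay_line)))
        return output

    # 4-tap moving average: every product is by the constant 0.25.
    print(fir([1, 2, 3, 4, 5, 6], [0.25, 0.25, 0.25, 0.25]))

An IIR filter adds feedback terms computed from previous outputs, which is what makes its error analysis and fixed-point sizing harder.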

For arithmetic operators to achieve maximum performance, context-specific pipelining is required.
Although the designer's knowledge is of great help when building and pipelining an arithmetic datapath, the task remains complex and error-prone.
A user-directed, automated method for pipelining has been developed.

This thesis provides a generator of high-quality, ready-made operators for coarse computing cores, which brings FPGA-based computing a step closer to mainstream adoption.
The cores are part of an open-ended generator, where functions are described as high-level objects such as mathematical expressions.


PhD Defense: “How to operate IoT networks with contracts of quality of service (Service Level Agreements)”, by Guillaume Gaillard, on 19th December


The defense will take place on Monday 19th December at 10:00 in the Chappe Amphitheater, Claude Chappe building, INSA Lyon.
The presentation will be held in French with slides in English.

Jury

Reviewers

Thierry TURLETTI, Inria Sophia Antipolis
Pascale MINET, Inria Paris

Examiners

Pascal THUBERT, Cisco Systems
Philippe OWEZARSKI, CNRS, Toulouse
Isabelle GUÉRIN-LASSOUS, Lyon 1 University

Advisors

Dominique BARTHEL, Orange Labs, Meylan
Fabrice VALOIS, INSA Lyon
Fabrice THÉOLEYRE, CNRS, ICube

This thesis work was carried out in collaboration between Orange Labs, INSA Lyon, ICube, and the Inria UrbaNet team of the CITI Lab.

Abstract

With the growing use of distributed wireless technologies for modern services, deploying dedicated radio infrastructures does not ensure large-scale, low-cost and reliable communications. This PhD work therefore aims at enabling an operator to deploy a single radio network infrastructure for several client applications, thus forming the Internet of Things (IoT).
We evaluate the benefits of sharing an architecture among different traffic flows in order to reduce deployment costs, obtaining wide coverage through efficient use of the capacity of the network nodes. This requires ensuring a differentiated Quality of Service (QoS) for the flows of each application.
We first propose to specify QoS contracts, namely Service Level Agreements (SLAs), in the context of the IoT. SLAs include specific Key Performance Indicators (KPIs), such as the transit time and the delivery ratio, concerning connected devices that are geographically distributed in the environment. The operator agrees with each client on the sources and the amount of traffic for which performance is guaranteed. Second, we describe the features needed to implement the SLAs on the operated network, and we organize them into an SLA management architecture.
We consider the admission of new flows, the analysis of current performance and the configuration of the operator’s relays.
Based on a robust multi-hop technology, IEEE Std 802.15.4-2015 in TSCH mode, we provide two essential elements to implement the SLAs: a mechanism for monitoring the KPIs, and KAUSA, a resource allocation algorithm with multi-flow QoS constraints. The former uses existing data frames as a transport medium to reduce the overhead in terms of communication resources; we compare different piggybacking strategies to find a tradeoff between the performance and the efficiency of the monitoring. With the latter, KAUSA, we dedicate adjusted time-frequency resources to each message, hop by hop. KAUSA takes into account the interference, the reliability of radio links and the expected load to improve the distribution of allocated resources and prolong the network lifetime. We show the gains and the validity of our contributions with simulations based on realistic traffic scenarios and requirements.
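As a purely illustrative sketch of the monitoring side (hypothetical record format and code, not the mechanism specified in the thesis), the two KPIs mentioned above can be computed at the operator's information system from per-flow counters and timestamps piggybacked in the data frames:

    # Hypothetical KPI computation from piggybacked records:
    # each record = (flow_id, sequence_number, generation_time, arrival_time).
    def compute_kpis(records, sent_per_flow):
        stats = {}
        for flow, seq, t_gen, t_arr in records:
            s = stats.setdefault(flow, {"received": 0, "transit_sum": 0.0})
            s["received"] += 1
            s["transit_sum"] += t_arr - t_gen
        return {
            flow: {
                "delivery_ratio": s["received"] / sent_per_flow[flow],
                "mean_transit_time": s["transit_sum"] / s["received"],
            }
            for flow, s in stats.items()
        }

    records = [("metering", 1, 0.0, 1.2), ("metering", 2, 5.0, 6.5)]
    print(compute_kpis(records, {"metering": 3}))   # 2/3 delivered, 1.35 s mean transit

The interest of piggybacking is precisely that these records travel inside frames that are transmitted anyway, instead of consuming dedicated cells in the TSCH schedule.
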
Keywords: Quality of Service, Wireless Sensor Networks, Multi-hop, Internet of Things, Service Level Agreement, Key Performance Indicators, Reliability, Network Management, Network Monitoring, Scheduling, 6TiSCH

 


HDR Defense: “Complexity of Ambient Software: from Composition to Distributed, Contextual, Autonomous, Large-scale Execution”, by Frédéric Le Mouël, on 28th November


The defense will take place on Monday 28th November at 10:00 in the Chappe amphitheatre, Chappe Building, INSA Lyon.

Jury

Reviewers

Pr Thierry DELOT, Valenciennes University
Pr Daniel HAGIMONT, ENSEEIHT
Pr Michel RIVEILL, Nice Sophia Antipolis University

Examiners

Pr Isabelle GUÉRIN LASSOUS, Claude Bernard Lyon 1 University
Pr Fabrice VALOIS, INSA de Lyon
MCF Philippe ROOSE, Pau et Pays de l’Adour University

Abstract

Combined with the development of middleware in the 1990s-2000s, Ambient Intelligence has shaped the 2020 scenarios. With a growing number of devices (smartphones, sensors, connected watches, glasses, etc.) and increasing constraints on energy consumption, size and mobility, the deployment, management and programming of these environments have become highly complex. Middleware offers good properties of abstraction, allowing modularity and efficient reuse, and of interconnection, allowing openness and safety of systems. Hence, it plays a paramount role in the current deployment of the Internet of Things.

During the last years, my contributions have focused on studying and finding solutions to three middleware issues in this context: dynamism, scalability and autonomy. Several platforms have been developed to validate the scientific and technological choices made. Jooflux and ConGolo are JVM-based approaches integrating dynamism into applications, either explicitly with ConGolo's contextual programming or implicitly with Jooflux's transparent aspect weaving. AxSeL, ACOMMA and MySIM are service-oriented approaches capturing dynamism with dynamic and contextual service loading/unloading, collaborative execution, and semantic QoS-based service composition. CANDS manipulates and manages very large information flows on very large service graphs while preserving millisecond response times. Pri-REIN improves on it with quality of service.

An important result has been to show that the autonomy property is strongly correlated with the middleware application domain; it was particularly tested in smart cities with guidance applications, on-street parking management and traffic optimisation.

This work has been strongly supported by five defended theses.

For future work, I will consider specific IoT issues in order to reduce human intervention: large-scale initial deployment, safe and secure management, and distributed and autonomous decision-making that infers a global behaviour locally, i.e. Small Data.

 


PhD Defense: “Cooperative communications with Wireless Body Area Networks for motion capture”, by Arturo Jimenez Guizar, on 27th September


The defense will take place on Tuesday 27th September at 10:00 in the Chappe amphitheatre, Chappe Building, INSA Lyon.

It will be in French with slides in English.

Jury

Reviewers

JULIEN-VERGONJANNE Anne, Limoges University
BERDER Olivier, Rennes 1 University

President of the jury

LE RUYET Didier, Conservatoire National des Arts et Métiers

Examiner

UGUEN Bernard, Rennes 1 University

Advisors

GORCE Jean-Marie, INSA de Lyon
GOURSAUD Claire, INSA de Lyon

Abstract

Wireless Body Area Networks (WBANs) refer to the family of “wearable” wireless sensor networks (WSNs) used to collect personal data, such as human activity, heart rate, sleep sequences or geographical position.

This thesis aims at proposing cooperative algorithms and cross-layer mechanisms with WBAN to perform large-scale individual motion capture and coordinated group navigation applications.

For this purpose, we exploit the advantages of cooperative and heterogeneous WBANs under full/half-mesh topologies for localization purposes, combining on-body links at the body scale, body-to-body links between the mobile users of a group, and off-body links with respect to the environment and the infrastructure. The wireless transmission relies on an impulse-radio Ultra-Wideband (IR-UWB) radio (based on the IEEE 802.15.6 standard) in order to obtain accurate peer-to-peer ranging measurements from Time of Arrival (ToA) estimates. We thus address the problem of ranging and positioning estimation through the design of cross-layer strategies, considering realistic body mobility and channel variations.

Our first contribution is the creation of an unprecedented WBAN measurement database obtained from real experimental scenarios for mobility and channel modelling. We then introduce a discrete-event (WSNet) and deterministic (PyLayers) co-simulation tool able to exploit our measurement database and help us in the design and validation of cooperative algorithms. Using these tools, we investigate the impact of node mobility and channel variations on ranging estimation. In particular, we study the “three-way ranging” (3-WR) protocol and observe that the delays between 3-WR packets affect the estimated distances as a function of the speed of the nodes. We then quantify and compare the error with statistical models and show that the error generated by the channel is larger than the mobility error.
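For intuition about why packet delays matter (a simplified two-way ranging sketch, not the exact 3-WR analysis of the thesis), the distance follows from the time of flight, and any movement during the reply delay biases the estimate in proportion to speed times delay:

    # Simplified two-way ranging: node A measures the round-trip time, node B
    # reports its internal reply delay. Illustrative numbers only.
    C = 299_792_458.0                          # speed of light, m/s

    def twr_distance(t_round, t_reply):
        return C * (t_round - t_reply) / 2.0   # distance from time of flight

    true_d = 5.0
    tof = true_d / C
    t_reply = 1e-3                             # 1 ms spent inside node B
    print(twr_distance(2 * tof + t_reply, t_reply))          # ~5.000 m

    # If B moves away at 2 m/s during the reply, the return path is longer
    # by v * t_reply = 2 mm, so the estimate is biased by about 1 mm.
    v = 2.0
    tof_back = (true_d + v * t_reply) / C
    print(twr_distance(tof + t_reply + tof_back, t_reply))   # ~5.001 m

With three packets instead of two, the same reasoning applies to each inter-packet delay, which is why the scheduling of 3-WR packets interacts with body mobility.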

In a second step, we extend our study to position estimation. We analyze different strategies at the MAC layer, through scheduling and slot allocation algorithms, to reduce the impact of mobility. We then propose to optimize our positioning algorithm with an extended Kalman filter (EKF), using our scheduling strategies and the statistical models of the mobility and channel errors. Finally, we propose a distributed cooperative algorithm based on the analysis of long-term and short-term link quality estimators (LQEs) to improve the reliability of positioning. To do so, we evaluate the positioning success rate under three different channel models (empirical, simulated and experimental), along with a conditional algorithm (based on game theory) for the choice of virtual anchors. We show that our algorithm improves the number of estimated positions for the nodes with the worst localization performance.
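The following compact sketch shows the flavour of an EKF update from a single range measurement (a generic textbook step under simplifying assumptions: static 2-D position, known anchor, scalar measurement noise; it is not the filter actually tuned in the thesis):

    # Generic EKF range update for a static 2-D position; illustrative only.
    import numpy as np

    def ekf_range_update(x, P, anchor, z, meas_var):
        """x: position estimate (2,), P: covariance (2, 2),
        anchor: anchor position (2,), z: measured range in metres."""
        diff = x - anchor
        predicted = np.linalg.norm(diff)        # h(x): predicted range
        H = (diff / predicted).reshape(1, 2)    # Jacobian of h at x
        S = H @ P @ H.T + meas_var              # innovation covariance
        K = P @ H.T / S                         # Kalman gain
        x_new = x + (K * (z - predicted)).ravel()
        P_new = (np.eye(2) - K @ H) @ P
        return x_new, P_new

    x, P = np.array([1.0, 1.0]), np.eye(2) * 10.0
    x, P = ekf_range_update(x, P, anchor=np.array([3.0, 4.0]), z=5.0, meas_var=0.01)
    print(x, np.diag(P))

In the cooperative setting, the virtual anchors can themselves be other on-body or body-to-body nodes whose positions are only estimated, which is where the link quality estimators come into play.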

 


PhD Defense: “Data aggregation in Wireless Sensor Networks”, by Jin Cui, on 27th June


The defense will take place on Monday 27th June at 14:30 in the Chappe amphitheatre, Chappe Building, INSA Lyon.

The presentation will be in English.

Jury

Reviewers

MINET Pascale, Inria
DIAS DE AMORIM Marcelo, CNRS

Examiners

BEYLOT André-Luc, ENSEEIHT
ROUSSEAU Franck, ENSIMAG
BOUSSETTA Khaled, Université Paris 13

Advisor

VALOIS Fabrice, INSA Lyon

Abstract

Wireless Sensor Networks (WSNs) have been regarded as an emerging and promising field in both academia and industry. Such networks are currently deployed thanks to their unique properties, such as self-organization and ease of deployment. However, some technical challenges still need to be addressed, such as energy and network capacity constraints. Data aggregation, a fundamental solution, processes information at the sensor level into a useful digest, and only transmits the digest to the sink. Energy and capacity consumption are reduced because fewer data packets are transmitted. This thesis investigates aggregation functions, a key category of data aggregation that determines how information is aggregated at the sensor level.

We make four main contributions:

Firstly, we propose two new networking-oriented metrics to evaluate the performance of aggregation functions: the aggregation ratio and the packet size coefficient. The aggregation ratio measures the energy saved by data aggregation, and the packet size coefficient evaluates the change in network capacity due to data aggregation. Using these metrics, we confirm that data aggregation saves energy and capacity whatever routing or MAC protocol is used.
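One plausible way to formalise these two metrics, given here only as an illustration (the thesis gives the precise definitions), is as ratios between the traffic generated with and without aggregation:

    \[
      \text{aggregation ratio} \;\approx\; \frac{N_{\mathrm{tx}}^{\mathrm{agg}}}{N_{\mathrm{tx}}^{\mathrm{raw}}},
      \qquad
      \text{packet size coefficient} \;\approx\; \frac{L_{\mathrm{agg}}}{L_{\mathrm{raw}}},
    \]

where N_tx counts packet transmissions in the network and L is the packet size: the first ratio reflects the energy saved by sending fewer packets, while the second captures how the size of aggregated packets changes the load on the network capacity.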

Secondly, to reduce the impact of sensitive raw data, we propose a data-independent aggregation method which benefits from similar data evolution and achieves better recovered fidelity.

Thirdly, a property-independent aggregation function is proposed to adapt to dynamic data variations. Compared to other functions, our proposal fits the latest raw data better and achieves real adaptability without assumptions about the application or the network topology.

Finally, for a given application and a target accuracy, we classify forecasting aggregation functions by their performance. The networking-oriented metrics are used to measure function performance, and a Markov Decision Process is used to compute them. A dataset characterization and a classification framework are also presented to guide researchers and engineers in selecting an appropriate function under specific requirements.
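For intuition about what a forecasting aggregation function does (a generic dual-prediction sketch, not one of the functions classified in the thesis), the sensor and the sink can share the same simple model, and the sensor transmits only when reality drifts too far from the forecast:

    # Generic dual-prediction sketch: transmit only when the shared forecast
    # (here, the last transmitted value) deviates by more than eps.
    def aggregate_stream(readings, eps):
        transmitted = []
        forecast = None
        for t, value in enumerate(readings):
            if forecast is None or abs(value - forecast) > eps:
                transmitted.append((t, value))   # update the sink
                forecast = value                 # sink and sensor share this model
        return transmitted

    readings = [20.0, 20.1, 20.2, 22.5, 22.6, 22.4, 25.0]
    print(aggregate_stream(readings, eps=1.0))   # [(0, 20.0), (3, 22.5), (6, 25.0)]

The target accuracy fixes eps, and metrics such as those above then quantify how many transmissions a given function actually saves on a given dataset.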


PhD Defense: “Fluxional compiler: seamless shift from development productivity to performance efficiency, in the case of real-time web applications”, by Étienne Brodu, on 21st June


The defense will take place on 21st June at 14:00 in Amphi Chappe, and will be in French with slides in English.

Jury

Reviewers

Gaël THOMAS, Telecom Sud Paris

Frédéric LOULERGUE, LIFO

Examiners

Floréal MORANDAT, LaBRI

Frédéric OBLÉ, Atos Worldline

Advisor

Stéphane FRÉNOT, INSA Lyon

Abstract

Most of the now-popular web services started as small projects created by a few individuals and grew exponentially. The Internet supports this growth because it extends the reach of our communications worldwide while reducing their latency. During its development, an application must grow exponentially, otherwise it risks being outpaced by the competition.

In the beginning, it is important to verify quickly that the service responds to user needs: fail fast. Languages like Ruby or Java became popular because they offer a productive approach to iterate quickly on user feedback. A web application that correctly responds to user needs can go viral. Eventually, the application needs to be efficient to cope with the increase in traffic.

But it is difficult for an application to be at once productive and efficient. When the user base becomes too large, it is often necessary to switch the development approach from productivity to efficiency. No platform reconciles these two objectives, so the application must be rewritten into an efficient execution model, such as a pipeline. This is a risk, as it represents a huge and uncertain amount of work. To avoid this risk, this thesis proposes to maintain the productive representation of an application alongside the efficient one.

Javascript is a productive language with a significant community. Its execution engine is the most widely deployed, as it is present in every browser and on some servers as well with Node.js. It is now considered the main language of the web, ousting Ruby or Java. Moreover, the Javascript event loop is similar to a pipeline: both execution models process a stream of requests by chaining independent functions. However, the event loop supports the need for development productivity with its global memory, while the pipeline representation allows efficient execution through parallelization.

This thesis studies the possibility of an equivalence to transform an implementation from one representation to the other. With this equivalence, the development team can follow the two approaches concurrently and continuously iterate the development to benefit from both objectives.

This thesis presents a compiler that identifies the pipeline in a Javascript application and isolates its stages into fluxions. A fluxion, named after the contraction of function and flux, executes a function for each datum on a stream. Fluxions are independent and can be moved from one machine to another so as to cope with increasing traffic. The development team can begin with the productivity of the event-loop representation and, with the transformation, progressively iterate to reach the efficiency of the pipeline representation.
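As a toy illustration of the fluxion concept (in Python rather than Javascript, and purely schematic: this is not the compiler's actual output or runtime), each stage is an independent function applied to every datum of a stream, so stages compose like a pipeline and could be relocated independently:

    # Toy pipeline of fluxion-like stages; each stage applies one function
    # to every datum flowing through it. Illustrative only.
    def stage(fn, upstream):
        for datum in upstream:
            yield fn(datum)

    def parse(request):
        return {"path": request.strip()}

    def route(message):
        return {**message, "handler": "static"}

    def render(message):
        return f"200 OK {message['path']} via {message['handler']}"

    requests = ["/index.html\n", "/about.html\n"]
    for response in stage(render, stage(route, stage(parse, requests))):
        print(response)

In the real system each stage must also receive whatever part of the event loop's global memory it relies on, which is the kind of difficulty such a transformation has to resolve.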

 


PhD Defense: “A flexible gateway receiver architecture for the urban sensor networks”, by Mathieu Vallérian, on 15th June


The defense will take place on 15th June at 10:00 in Amphi Chappe, and will be in French with slides in English.

Jury

Reviewers

Claude DUVANAUD, Université de Poitiers

Patrick LOUMEAU, Télécom ParisTech

Examiners

Christophe MOY, CentraleSupélec

Jean-François DIOURIS, Polytech Nantes

Pierre MADILLO, Orange Labs

Advisors

Guillaume VILLEMAUD, INSA de Lyon

Florin HUTU, INSA de Lyon

Tanguy RISSET, INSA de Lyon

Guest

Benoît MISCOPEIN, CEA-Leti

Abstract

In this thesis, a receiver architecture for a gateway in an urban sensor network is designed. To support the multiple protocols coexisting in this environment, the best approach seems to be a reconfigurable architecture, following the Software-Defined Radio (SDR) scheme. All the received signals should be digitized at once by the Analog-to-Digital Converter (ADC) in order to sustain the reconfigurability of the architecture: all the signal processing can then be performed digitally.
The main complication comes from the heterogeneity of the propagation conditions: owing to the urban environment and the diversity of the covered applications, the signals can reach the gateway with widely varying powers. The gateway must therefore handle the high dynamic range of these signals. This constraint weighs heavily on the ADC, whose resolution usually depends on the frequency band that can be digitized.
A first study is carried out to evaluate the ADC resolution required to cope with this dynamic range. The dynamic range of the signals is first evaluated, then the resolution required to digitize them is determined theoretically and through simulations. For a 100 dB power ratio between the strongest and weakest signals, we show that a 21-bit ADC resolution is needed, which is far too high to be reached with existing ADCs.
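As a back-of-the-envelope check of that order of magnitude, assuming the standard ideal-ADC formula and, arbitrarily, about 25 dB of residual SNR left for the weakest signal (neither figure is taken from the thesis):

    \[
      \mathrm{SNR}_{\mathrm{ADC}}(N) \;\approx\; 6.02\,N + 1.76\ \mathrm{dB}
      \quad\Longrightarrow\quad
      N \;\gtrsim\; \frac{100 + 25 - 1.76}{6.02} \;\approx\; 20.5,
    \]

i.e. about 21 bits. The same rule of thumb makes the reduction reported below plausible: attenuating the strong signal by roughly 30 dB removes about five bits' worth of required dynamic range (5 x 6.02 dB), hence the drop from 21 to 16 bits.
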
Two different approaches are explored to reduce the signals’ dynamic range in the analog domain. The first one uses companding: since this technique is commonly used in practice for analog dynamic range reduction (e.g. in audio signal acquisition), its relevance to multiple-signal digitization is studied. Three existing compression laws are explored and two implementations are proposed for the most efficient of them. The feasibility of these implementations is also discussed.
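As a concrete example of such a law, the classic μ-law companding used in telephony (whether it is among the three laws studied in the thesis is not stated here) maps an input x in [-1, 1] to

    \[
      F(x) \;=\; \operatorname{sgn}(x)\,\frac{\ln\!\left(1 + \mu\,\lvert x\rvert\right)}{\ln(1 + \mu)},
      \qquad \mu = 255,
    \]

so that small amplitudes are expanded and large ones compressed, letting a fixed-resolution ADC devote more of its quantization steps to the weak signals.
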
In the second approach, we propose a two-antenna receiver architecture to decrease the dynamic range. Two digitization paths are employed: the first digitizes only the strongest signal in the band. Using the information obtained on this signal, the second branch of the architecture is reconfigured to attenuate the strong signal. With the dynamic range thus reduced, the signals can be digitized with a lower-resolution ADC. We show that the ADC resolution can be decreased from 21 to 16 bits using this receiver architecture.
Finally, the promising two-antenna architecture is tested experimentally to demonstrate its efficiency with dynamic signals (i.e. signals that appear and disappear).


 


PhD Defense: “From Mobile to Cloud: Using Bio-Inspired Algorithms for Collaborative Application Offloading”, by Roya Golchay, on 26th January


Jury:

Reviewers:
Philippe ROOSE, Maître de Conférences HDR, Université de Pau et des Pays de l’Adour
Sophie CHABRIDON, Maître de Conférences HDR, Télécom SudParis

Examiners:
Bernard TOURANCHEAU, Professeur des Universités, Université Joseph Fourier
Philippe LALANDA, Professeur des Universités, Université Joseph Fourier
Jean-Marc PIERSON, Professeur des Universités, Université Paul Sabatier, Toulouse 3, France

Advisors:
Frédéric LE MOUËL, Maître de conférences, INSA de Lyon
Stéphane FRÉNOT, Professeur des Universités, INSA de Lyon

Summary:

Not bounded by time and place, and now having a wide range of capabilities, smartphones are all-in-one, always-connected devices, selected by users as their most effective, convenient and necessary communication tools. Current smartphone applications face a growing demand for functionalities from users, for data collection and storage from nearby IoT devices, and for computing resources for data analysis and user profiling, while at the same time they have to fit into a compact and constrained design, a limited energy budget, and a relatively resource-poor execution environment. Using resource-rich systems is the classic solution, introduced in Mobile Cloud Computing, to overcome these mobile device limitations by remotely executing all or part of an application in cloud environments. This technique is known as application offloading.

Offloading to a cloud, implemented as a geographically distant data center, however introduces a large network latency that is not acceptable to smartphone users. Moreover, massive offloading to a centralized architecture creates a bottleneck that prevents the scalability required by the expanding market of IoT devices. Fog Computing has been introduced to bring storage and computation capabilities back into the user's vicinity, or close to where they are needed. Some architectures are emerging, but few algorithms exist to deal with the dynamic properties of these environments.

In this thesis, we focus on designing ACOMMA, an Ant-inspired Collaborative Offloading Middleware for Mobile Applications that allows application partitions to be dynamically offloaded, at the same time, to several remote clouds or to spontaneously created local clouds including devices in the vicinity. The main contributions of this thesis are twofold. While many middleware platforms have dealt with one or more offloading challenges, few have proposed an open, service-based architecture that is easy to use on any mobile device without special requirements. Among the main challenges are the issues of what and when to offload in a dynamically changing environment, where the mobile device profile, the context and the server properties play a considerable role in effectiveness. To this end, we develop bio-inspired decision-making algorithms: a dynamic bi-objective decision-making process with learning, and a decision-making process in collaboration with other mobile devices in the vicinity. We define an offloading mechanism with fine-grained, method-level application partitioning on the application's call graph, and we use ant colony algorithms to bi-objectively optimize CPU consumption and total execution time, including the network latency.
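To make the bi-objective trade-off concrete, here is a deliberately naive per-method decision (illustrative assumptions throughout; ACOMMA itself searches partitions of the whole call graph with ant colony optimization rather than deciding method by method):

    # Naive offloading decision for one method: weighted bi-objective cost
    # (CPU consumption + total time, including network latency), local vs. remote.
    def offload_decision(local_cpu_s, remote_cpu_s, payload_kbit, bandwidth_kbps,
                         latency_s, w_cpu=0.5, w_time=0.5):
        def score(cpu, time):
            return w_cpu * cpu + w_time * time

        transfer = payload_kbit / bandwidth_kbps + latency_s
        local_score = score(local_cpu_s, local_cpu_s)
        remote_score = score(0.05 * local_cpu_s,            # residual marshalling cost
                             remote_cpu_s + transfer)
        return "offload" if remote_score < local_score else "local"

    # Heavy computation, small payload, decent network: offloading wins.
    print(offload_decision(local_cpu_s=2.0, remote_cpu_s=0.2,
                           payload_kbit=50, bandwidth_kbps=1000, latency_s=0.05))

The ant colony search explores many such trade-offs at once over the call graph, with pheromones reinforcing the partitions that score well on both objectives.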


PhD Defense: “Impact of a local and autonomous decision on intelligent transport systems at different scales”, by Marie-Ange Lèbre, on 25th January


Jury

Reviewers:
Arnaud De La Fortelle, Mines ParisTech
Abdellah Moudni, Université de Technologie de Belfort-Montbéliard

Examiners:
Farouk Yalaoui, Université de Technologie de Troyes
Marco Fiore, CNR-IEIIT, Italie

Advisors:
Frédéric Le Mouël, INSA de Lyon
Stéphane Frénot, INSA de Lyon
Eric Ménard, Valeo

Summary:

In this thesis we present vehicular applications at different scales: from a small scale that allows real tests of communication and services, to larger scales that involve more constraints but allow simulations over the entire network. In this context, we highlight the importance of real data and real urban topology (the production of a real trace) in order to properly interpret simulation results. We describe different services using V2V and V2I communication. In none of them do we pretend to take control of the vehicle: the driver remains present in his vehicle, and our goal is to show the potential of communication through local decisions. At the small scale, we focus on a service based on a traffic light that improves travel times, waiting times, and CO2 and fuel consumption. The medium scale is a roundabout; a decentralized, autonomous and probabilistic algorithm improves the same parameters and shows that, with a simple and decentralized decision-making process, the system is robust to packet loss, density, human behavior and equipment rate. Finally, at the scale of a city, we show that local and decentralized decisions, with only partial access to the information in the network, lead to results close to centralized solutions, while the amount of data transiting the network is greatly reduced. We also test the response of these systems to significant disruptions of the network, such as accidents, a terrorist attack or a natural disaster. Models allowing local decisions based on information delivered around the vehicle show their potential, whether with V2I or V2V communication.
