Complex Event Processing over Data Streams
The concept of event processing is established as a generic computational paradigm in a wide range of application fields, from data processing in Web environments, through maritime and transport, to finance and medicine. Events report on state changes of a system and its environment. Complex Event Processing (CEP), in turn, refers to the identification of complex/composite events of interest — collections of simple events that satisfy some pattern — thereby providing the opportunity for reactive and proactive measures. Examples include the recognition of attacks on computer network nodes, human activities in video content, emerging stories and trends on the Social Web, traffic and transport incidents in smart cities, and fraud in electronic marketplaces. The goal of this talk is first to provide an overview of the field and second to discuss some major challenges that arise from the high volume and velocity of the generated event streams. Finally, I will discuss the building blocks of our recent system for mitigating the inherent issues in CEP.
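As a minimal illustration of the pattern-matching idea behind CEP (a sketch, not a system from the talk), the code below flags a composite event whenever several simple events of the same type arrive within a sliding time window. The event types, threshold and window are invented for the example.

```python
from collections import deque

def detect_burst(events, threshold=3, window=10.0):
    """Flag a composite 'burst' event when `threshold` simple events of
    the same type occur within `window` seconds.  `events` is assumed to
    be a time-ordered list of (timestamp, event_type) pairs."""
    recent = {}        # event type -> deque of recent timestamps
    composites = []
    for ts, etype in events:
        q = recent.setdefault(etype, deque())
        q.append(ts)
        while q and ts - q[0] > window:   # slide the time window
            q.popleft()
        if len(q) >= threshold:
            composites.append((ts, etype))
            q.clear()                     # consume the matched simple events
    return composites

stream = [(0.0, "scan"), (2.0, "scan"), (3.0, "login"), (4.0, "scan")]
print(detect_burst(stream))   # -> [(4.0, 'scan')]
```

A real CEP engine adds richer pattern operators (sequence, negation, aggregation) and shared evaluation across many such patterns, but the window-and-threshold loop above is the basic shape of the matching problem.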
Syed Gillani is currently an ATER at the CITI lab, INSA Lyon. His research interests lie in the broad area of database systems, stream processing, query optimisation and the Semantic Web. During his PhD he proposed various techniques to bridge the gap between core Semantic Web concepts and database optimisation techniques. He also proposed a new query language, and its implementation, for Semantic Complex Event Processing.
Multi-Agent Simulation and High-Performance Computing on Graphics Cards
Many complex systems are now studied by simulation, using models based on the multi-agent paradigm. In these models, individuals, their environment and their interactions are represented directly. This type of simulation sometimes requires handling very large numbers of entities, which raises performance and scalability issues. In this context, general-purpose programming on graphics cards (GPGPU) is an attractive solution: it delivers very substantial performance gains on ordinary personal computers. GPGPU nevertheless requires extremely specific programming, which limits both its accessibility and the reuse of the resulting developments; this is particularly true in the context of multi-agent simulation. In this talk, we will present this technology and the research we have carried out to overcome these difficulties. In particular, we will describe a design method, called GPU delegation, which (1) adapts multi-agent models to the GPGPU context and (2) facilitates the reuse of the associated developments.
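The underlying idea of delegating environment-level computation away from individual agent behaviours can be sketched as follows. NumPy's bulk array operations stand in for an actual GPU kernel, and the pheromone-field model, function names and parameters are illustrative, not taken from the talk.

```python
import numpy as np

def step_field(field, evaporation=0.1, diffusion=0.25):
    """Environment dynamics (diffusion + evaporation of a pheromone
    field), computed for every cell at once -- the kind of regular,
    data-parallel work one would delegate to a GPU kernel."""
    up    = np.roll(field,  1, axis=0)   # neighbour above (toroidal grid)
    down  = np.roll(field, -1, axis=0)
    left  = np.roll(field,  1, axis=1)
    right = np.roll(field, -1, axis=1)
    mean_neighbours = (up + down + left + right) / 4.0
    field = field + diffusion * (mean_neighbours - field)
    return field * (1.0 - evaporation)

field = np.zeros((8, 8))
field[4, 4] = 1.0              # one agent dropped a pheromone
field = step_field(field)
# Agents now simply *read* the precomputed field at their position,
# instead of each agent recomputing its perception sequentially.
print(field[4, 4])
```

The design point is the separation: agent behaviours stay on the CPU in ordinary object-oriented code, while the regular, gridded environment dynamics are expressed as a data-parallel kernel that can be reused across models.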
Fabien Michel holds a PhD in computer science from the University of Montpellier (2004). From 2005 to 2008 he was an associate professor (maître de conférences) at CReSTIC in Reims, before joining the Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM), where he currently works. His research lies mainly in the modelling and simulation of multi-agent systems (MAS) and rests on the proposal of formal and conceptual models (e.g. the IRM4S model) and generic software tools (the MaDKit and TurtleKit platforms), as well as on their use in various domains such as video games, digital image processing and collective robotics. More specifically, the common thread of his work, synthesised in his habilitation (HDR, 2015), is an "environment-centred" approach (E4MAS): in contrast to approaches centred on the design of individual behaviours, the agents' environment is treated as a first-class abstraction whose role is paramount. In particular, he has recently applied this approach to propose an original way of using high-performance computing on graphics cards (GPGPU) for MAS simulation.
Gabriela Czibula, Prof., and Istvan Czibula, Assoc. Prof., of the Computer Science Department, Babes-Bolyai University, will present "Machine Learning for Solving Software Maintenance and Evolution Problems", together with a presentation of the Faculty of Mathematics and Computer Science of Babes-Bolyai University and of the MLyRE Research Group, at the CITI Lab on Monday, July 10th 2017.
There has been growing interest in understanding and optimising Wireless Sensor Network (WSN) MAC protocols in recent years, where limited and constrained resources have driven research primarily towards reducing the energy consumption of MAC functionalities. In this talk, we present the main focus of WSN MAC protocols, the design guidelines that inspired these protocols, the drawbacks and shortcomings of existing solutions, and how existing and emerging technologies will influence future solutions.
Abdelmalik Bachir received his graduate degree from the National Institute of Informatics, Algiers, Algeria, in 2001, the DEA diploma in informatics from the University of Marseille, France, in 2002, and the PhD degree from the Grenoble Institute of Technology, France, in 2007. He has held research positions at Avignon University, France Telecom R&D, Grenoble Institute of Technology, Imperial College London, and the CERIST Research Centre, Algiers. He is currently a professor at Biskra University, Algeria, and a consultant at Imperial Innovations. His research interests include MAC and routing protocols for wireless networks, wireless network deployment optimisation, mobile user mobility profiling, and inter-vehicle communication.
In this talk, the class of anti-uniform Huffman (AUH) codes for anti-uniform sources with finite and infinite alphabets is considered. The characteristics of such sources, as well as of Huffman codes for them, are first recalled. The sequence of bits produced by encoding an anti-uniform source with a Huffman code is modelled as a Markov source, whose characteristics are derived from the encoding procedure describing the Huffman code. The Huffman encoding process is viewed as transmission through a channel whose input is the source symbols and whose output is the code bits.
The class of AUH sources is known for achieving minimum redundancy in several situations. It has been shown that AUH codes potentially achieve the minimum redundancy of a Huffman code for a source in which the probability of one of the symbols is known. AUH codes are also efficient in highly unbalanced cost regimes, attaining the minimal average cost among all prefix-free codes. These properties open a wide range of applications and motivate the study of these sources from an information-theoretic perspective.
Starting from the AUH structure, the average codeword length, the code entropy and the average cost are derived. These results are then specialised to finite and infinite sources with different distributions (Poisson, negative binomial, geometric and exponential).
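As a concrete illustration (not taken from the talk), consider an infinite geometric source with p_i = (1-q)q^i. Assuming the unary-style codeword structure that AUH codes take for suitably skewed geometric sources (codeword lengths 1, 2, 3, …), the average codeword length and the source entropy can be computed by truncated sums and compared against their closed forms, 1/(1-q) and H_b(q)/(1-q):

```python
import math

def auh_average_length(q, n_terms=500):
    """Average codeword length of the unary-style AUH code for the
    geometric source p_i = (1-q) q^i, via a truncated sum (the tail
    beyond n_terms is negligible for moderate q)."""
    return sum((1 - q) * q**i * (i + 1) for i in range(n_terms))

def geometric_entropy(q, n_terms=500):
    """Entropy (bits/symbol) of the geometric source, truncated sum."""
    total = 0.0
    for i in range(n_terms):
        p = (1 - q) * q**i
        if p > 0.0:                       # guard against underflow
            total -= p * math.log2(p)
    return total

q = 0.4
L = auh_average_length(q)    # closed form: 1 / (1 - q)
H = geometric_entropy(q)     # closed form: H_b(q) / (1 - q)
print(round(L, 4), round(H, 4))
```

As the source coding theorem requires, L ≥ H, and the gap L − H is the redundancy that the AUH structure keeps small for such skewed distributions.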
Daniela Tarniceriu (PhD 1997) has been a full professor at the Technical University "Gh. Asachi" of Iasi, Romania, since 2001. Her research interests are in the fields of information theory, digital signal processing, statistical signal processing, data compression and encryption. She has co-authored 8 books, 85 journal papers and 65 conference papers. She has been involved in several research grants: two as scientific leader, two as coordinator, and 12 as a researcher.
Since 2016 she has been the Dean of the Faculty of Electronics, Telecommunications and Information Technology (ETTI) of the Technical University "Gh. Asachi" of Iasi, Romania; between 2008 and 2016 she was the head of the Telecommunications Department of ETTI, and since 2013 she has headed the Doctoral School of ETTI.
Recent advances in synthetic biology and chemistry are making it possible to form networks of nanoscale devices, known as nanomachines, with applications in medicine and environmental protection. These nanomachines have a limited ability to sense their environment, communicate, and take simple actions. A key potential application is therefore event detection, where the nanoscale network seeks to identify the presence of an undesirable state, such as the markers of an illness.
To support event detection, the nanoscale network must be able to communicate
observations from sensing nanomachines to a fusion center, where a decision can
be made. Due to strict size and energy constraints, this communication is a
challenging problem. Recently, a new approach known as molecular communication
has been proposed, where information is encoded in the state of molecules, such as
the release time, number, or type of molecules, which diffuse from the transmitter to
the receiver through a fluid. This new medium has dramatically different features from traditional electromagnetic and acoustic media, which calls for new channel models as well as new encoding and decoding strategies.
In this seminar, I will introduce the principles of molecular communication,
highlighting the differences from traditional communication schemes. I will then show
how molecular communication can support collaboration in nanoscale networks. In
particular, I will present a new event detection scheme for nanoscale networks,
which accounts for the unique characteristics of the underlying molecular communication links, known as the anomalous diffusion channel.
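To give a flavour of fusion-centre decision-making, the sketch below implements a simple threshold detector: sensing nanomachines release molecules only when they observe the event, and the fusion centre counts arrivals, modelled as Poisson with mean mu0 (background) or mu1 > mu0 (event present). This is a hedged illustration with invented parameters; the talk's scheme targets the more involved anomalous diffusion channel.

```python
import math

def poisson_pmf(k, mu):
    """P(X = k) for a Poisson random variable with mean mu."""
    return math.exp(-mu) * mu**k / math.factorial(k)

def false_alarm_prob(threshold, mu0):
    """P(count >= threshold) under the no-event hypothesis."""
    return 1.0 - sum(poisson_pmf(k, mu0) for k in range(threshold))

def detection_prob(threshold, mu1):
    """P(count >= threshold) when the event is present."""
    return 1.0 - sum(poisson_pmf(k, mu1) for k in range(threshold))

mu0, mu1 = 2.0, 10.0          # illustrative background / event means
for t in (4, 6, 8):
    print(t, round(false_alarm_prob(t, mu0), 4),
             round(detection_prob(t, mu1), 4))
```

Raising the threshold trades false alarms against missed detections; channel effects such as anomalous diffusion change the count statistics and hence where this trade-off sits.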
Mai Cong Trang is currently a PhD candidate in molecular communications under the supervision of Dr. Trung Q. Duong at Queen's University Belfast and Dr. Malcolm Egan at INSA Lyon. He received the B.S. degree in Electronic and Electrical Engineering in 2008 from Le Quy Don Technical University, Vietnam, and the M.S. degree in Electronics and Communications Engineering in 2013 from The University of Electro‐Communications, Japan. His current research interests include molecular communications, nanomachine networks and bio-inspired networks.
In this talk, Jean-Michel Fourneau will present some analytic solutions of queueing network models that jointly model the data packets and the energy consumed by transmission and reception. These models are based on a discretisation of energy, leading to the notion of energy packets.
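The energy-packet abstraction can be illustrated with a toy discrete-time simulation (the talk's models are analytic queueing-network solutions; this sketch only shows the coupling between the two flows). The arrival probabilities and the one-energy-packet-per-transmission rule are assumptions of the example.

```python
import random

def simulate(steps, p_data, p_energy, seed=0):
    """Each step: a data packet arrives with prob. p_data, an energy
    packet with prob. p_energy; transmitting one data packet consumes
    exactly one energy packet (an assumed service rule)."""
    rng = random.Random(seed)
    data_q = energy_q = transmitted = 0
    for _ in range(steps):
        if rng.random() < p_data:
            data_q += 1
        if rng.random() < p_energy:
            energy_q += 1
        if data_q > 0 and energy_q > 0:   # serve: one data + one energy
            data_q -= 1
            energy_q -= 1
            transmitted += 1
    return transmitted, data_q, energy_q

tx, dq, eq = simulate(100_000, p_data=0.3, p_energy=0.5)
print(tx, dq, eq)   # throughput is limited by the slower arrival stream
```

With p_data < p_energy the data queue stays short and throughput tracks the data arrival rate, while surplus energy packets accumulate; the analytic models in the talk make this kind of behaviour exact via product-form results.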
Jean-Michel Fourneau has been Professor of Computer Science at the University of Versailles St Quentin, France, since 1992. He was formerly with the Ecole Nationale des Telecommunications, Paris, and the University of Paris XI Orsay as an assistant professor. He graduated in statistics and economics from the Ecole Nationale de la Statistique et de l'Administration Economique, Paris, and obtained his PhD and his habilitation in Computer Science at Paris XI Orsay in 1987 and 1991, respectively. He is a member of IFIP WG 7.3. His recent research interests are algorithmic performance evaluation, Stochastic Automata Networks, G-networks, stochastic bounds, and applications to high-speed networks, all-optical networks and energy consumption.
POETS: Partially Ordered Event Triggered Systems
The POETS project is a five-year effort to build a combined software and hardware system which allows applications to be split into 1M+ concurrent state machines and then executed on 100K+ concurrent hardware threads across 100+ tightly-coupled compute nodes. To achieve this we use an event-driven compute system with no global barriers or shared state, and rewrite applications to use globally asynchronous algorithms. This talk will give an overview of the hardware that is being built, and show how applications such as finite-volume solvers can be re-cast as asynchronous systems.
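The barrier-free, event-driven style can be sketched in miniature (a hedged illustration, not the POETS toolchain): each cell of a 1-D diffusion problem below is a tiny state machine that updates when it receives a neighbour's value and emits new events only when its own state changes appreciably. There is no global synchronisation step; the computation terminates when the event queue drains.

```python
from collections import deque

N, TOL = 8, 1e-5
state = [0.0] * N
state[0], state[N - 1] = 1.0, 0.0            # fixed boundary cells
neighbours = {i: [j for j in (i - 1, i + 1) if 0 < j < N - 1]
              for i in range(N)}
# each cell caches the last value heard from its left/right neighbour
latest = [[state[max(i - 1, 0)], state[min(i + 1, N - 1)]]
          for i in range(N)]
# boundary cells announce their values once; no barriers after that
events = deque((j, i, state[i]) for i in (0, N - 1) for j in neighbours[i])

while events:
    cell, src, value = events.popleft()       # deliver one event
    latest[cell][0 if src < cell else 1] = value
    new = sum(latest[cell]) / 2.0             # local relaxation step
    if abs(new - state[cell]) > TOL:          # emit only on a real change
        state[cell] = new
        events.extend((j, cell, new) for j in neighbours[cell])

print([round(v, 3) for v in state])   # converges to the linear profile
```

Each cell only ever reacts to messages and never waits on a global step, which is the property that lets such algorithms scale across very large numbers of hardware threads.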