Invited Speakers

Plenary Session

Jim Keller

University of Missouri

July 19th, 2022

Streaming Data Analytics: Clustering or Classification?

Abstract

As the volume and variety of temporally acquired data continue to grow, increased attention is being paid to streaming analysis of that data. Think of a drone flying over unknown terrain looking for specific objects which may present differently in different environments. Understanding the evolving environments is a critical component of a recognition system. With the explosion of ubiquitous continuous sensing (something Lotfi Zadeh predicted as one of the pillars of Recognition Technology in the late 1990s), this on-line streaming analysis is normally cast as a clustering problem. However, examining most streaming clustering algorithms leads to the understanding that they are actually incremental classification models. These approaches model existing and newly discovered structures via summary information that we call footprints. Incoming data is routinely assigned crisp labels (into one of the structures) and that structure’s footprints are incrementally updated; the data is not saved for iterative assignments.

The three underlying tenets of static clustering are:

1. Do you believe there are any clusters in your data?

2. If so, can you come up with a technique to find the natural grouping of your data?

3. Are the clusters you found good groupings of the data?

These questions do not directly apply to the streaming case.  What takes their place in this new frontier?

In this talk, I will provide some thoughts on what questions can substitute for the Big 3, but then focus on a new approach to streaming classification, directly acknowledging the real identity of this enterprise.  Because the goal is truly classification, there is no reason that these assignments need to be crisp. With my friends, I propose a new streaming classification algorithm, called StreamSoNG, that uses Neural Gas prototypes as footprints and produces a possibilistic label vector (typicalities) for each incoming vector. These typicalities are generated by a modified possibilistic k-nearest neighbor algorithm.  Our method is inspired by, and uses components of, a method that we introduced under the nomenclature of streaming clustering to discover underlying structures as they evolve. I will describe the various ingredients of StreamSoNG and demonstrate the resulting algorithms on synthetic and real datasets.
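
For readers unfamiliar with possibilistic labels, the sketch below illustrates the general idea of assigning typicalities from distances to each structure's prototypes with a possibilistic k-nearest-neighbor rule. It is only a minimal illustration: the membership formula, the scale parameters eta, and the values of k and m are assumptions made for the example, not the actual StreamSoNG components.

```python
import numpy as np

def possibilistic_typicalities(x, prototypes, eta, k=3, m=2.0):
    """Possibilistic label vector for one incoming sample x.

    prototypes : dict mapping structure id -> (n_i, d) array of prototype vectors
    eta        : dict mapping structure id -> positive scale (illustrative choice)
    Returns a dict of typicalities in [0, 1]; they need not sum to 1,
    which is what distinguishes possibilistic from probabilistic labels."""
    typicality = {}
    for c, protos in prototypes.items():
        d2 = np.sum((protos - x) ** 2, axis=1)   # squared distances to this structure's prototypes
        d2k = np.sort(d2)[:k].mean()             # mean over the k nearest prototypes
        typicality[c] = 1.0 / (1.0 + (d2k / eta[c]) ** (1.0 / (m - 1.0)))
    return typicality

# toy usage: two structures, each summarized by a few prototypes ("footprints")
rng = np.random.default_rng(0)
prototypes = {0: rng.normal(0.0, 1.0, (5, 2)), 1: rng.normal(4.0, 1.0, (5, 2))}
eta = {0: 1.0, 1: 1.0}
print(possibilistic_typicalities(np.array([0.2, -0.1]), prototypes, eta))
```

Because the typicalities are not forced to sum to one, a vector far from every footprint receives low typicality everywhere, which is exactly the kind of signal a streaming system can use to flag a possible new structure.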

Short Bio

James M. Keller received the Ph.D. in Mathematics in 1978. He is now the Curators’ Distinguished Professor Emeritus in the Electrical Engineering and Computer Science Department at the University of Missouri. Jim is an Honorary Professor at the University of Nottingham. His research interests center on computational intelligence: fuzzy set theory and fuzzy logic, neural networks, and evolutionary computation with a focus on problems in computer vision, pattern recognition, and information fusion including bioinformatics, spatial reasoning in robotics, geospatial intelligence, sensor and information analysis in technology for eldercare, and landmine detection. His industrial and government funding sources include the Electronics and Space Corporation, Union Electric, Geo-Centers, National Science Foundation, the Administration on Aging, The National Institutes of Health, NASA/JSC, the Air Force Office of Scientific Research, the Army Research Office, the Office of Naval Research, the National Geospatial Intelligence Agency, the U.S. Army Engineer Research and Development Center, the Leonard Wood Institute, and the Army Night Vision and Electronic Sensors Directorate. Professor Keller has coauthored over 500 technical publications.

Jim is a Life Fellow of the Institute of Electrical and Electronics Engineers (IEEE), a Fellow of the International Fuzzy Systems Association (IFSA), and a past President of the North American Fuzzy Information Processing Society (NAFIPS). He received the 2007 Fuzzy Systems Pioneer Award and the 2010 Meritorious Service Award from the IEEE. Jim won the 2021 IEEE Frank Rosenblatt Technical Field Award. He has been a distinguished lecturer for the IEEE CIS and the ACM. Jim finished a full six-year term as Editor-in-Chief of the IEEE Transactions on Fuzzy Systems, followed by serving as the Vice President for Publications of the IEEE Computational Intelligence Society from 2005-2008, then as an elected CIS AdCom member, and he finished another term as VP Pubs (2017-2020). He is President of IEEE CIS for 2022-2023. He was the IEEE TAB Transactions Chair as a member of the IEEE Periodicals Committee, and was a member of the IEEE Publication Review and Advisory Committee from 2010 to 2017. Among many conference duties over the years, Jim was the general chair of the 1991 NAFIPS Workshop, the 2003 IEEE International Conference on Fuzzy Systems, and co-general chair of the 2019 IEEE International Conference on Fuzzy Systems.

Paul Werbos

Applied Computational Intelligence Laboratory, Missouri University of Science and Technology, USA

July 20th, 2022

Neural Network Revolutions, Past and Future: From BP to Quantum Artificial General Intelligence (QAGI)

Abstract

The neural network field has experienced three massive revolutions, starting from 1987, when IEEE held the first International Conference on Neural Networks (ICNN), leading NSF to create the Neuroengineering research program which I ran and expanded from 1988 to 2014. This first period of growth already saw a huge proliferation of important new applications in engineering, such as vehicle control, manufacturing and partnerships with biology; see Werbos, “Computational intelligence from AI to BI to NI,” in Independent Component Analyses, Compressive Sampling, Large Data Analyses (LDA), Neural Networks, Biosystems, and Nanoengineering XIII, vol. 9496, pp. 149-157. SPIE, 2015.

IEEE conferences were the primary intellectual center of the new technology, relying most on generalized backpropagation (including backpropagation over time and backpropagation for deep learning) and on a ladder of neural network control designs, rising up to “reinforcement learning” (aka adaptive critics, or adaptive dynamic programming).

The second great revolution resulted from a paradigm-shifting research program, COPN, which resulted from deep dialogue and voting across research program directors at NSF: National Science Foundation (2007), Emerging Frontiers in Research and Innovation 2008 (https://www.nsf.gov/pubs/2007/nsf07579/nsf07579.htm). In that program, I funded Andrew Ng and Yann LeCun to test neural networks on crucial benchmark challenges in AI and computer science. After they demonstrated to Google that neural networks could outperform classical methods in AI, Google announced a new product which set off a massive wave of “new AI” in industry and in computer science. In computer science, this added momentum to the movement for Artificial General Intelligence (AGI), which our reinforcement learning designs already aimed at. However, there are levels and levels of generality, even in “general intelligence” (https://arxiv.org/pdf/q-bio/0311006.pdf). We now speak of Reinforcement Learning and Approximate Dynamic Programming (RLADP).

At WCCI 2014, held in Beijing, I presented detailed roadmaps of how to rise from the most powerful methods popular in computer science even today up to intelligence as general as that of the basic mammal brain; see “From ADP to the brain: foundations, roadmap, challenges and research priorities,” in 2014 International Joint Conference on Neural Networks (IJCNN), pp. 107-111, IEEE (https://arxiv.org/pdf/1404.0554.pdf). This led to intense discussions at Tsinghua and at the NSF of China, which led to new research directions in China, where there have been massive new applications beyond what most researchers in the West consider possible. (See http://1dddas.org/ for the diverse and fragmented communities in the West. Also do a patent search on Werbos for details of how to implement higher-level classical AGI.) Werbos and Davis (2016) show how this view of intelligence fits real-time data from rat brains better than older paradigms for brain modeling.

This year, we have opened the door to a new revolution. Just as adaptive analog networks (neural networks) massively and provably open up capabilities beyond old sequential Turing machines, the quantum extension of RLADP offers power far beyond what the usual Quantum Turing Machines (invented by David Deutsch) can offer. It offers true Quantum AGI, which can multiply capabilities by orders of magnitude in any application domain which requires higher intelligence, such as observing the sky, management of complex power grids, and “quantum bromium” (hard cybersecurity). See “Quantum technology to expand soft computing,” Systems and Soft Computing 4: 200031 (https://www.sciencedirect.com/science/article/pii/S2772941922000011) and links on internet issues at build-a-world.org.

Short Bio

Paul Werbos is best known (and most cited) for the original discovery of backpropagation, and for the theorem establishing its validity, as part of his PhD thesis in Applied Mathematics at Harvard in 1974. Even before 1974, he had developed backpropagation as one element of a more general approach to reinforcement learning (http://vixra.org/abs/1902.0046), which combined a new way to learn to approximate dynamic programming with key insights from Freud’s theory of how learning works in neurons of the brain. He inaugurated the field we now know as RLADP, Reinforcement Learning and Approximate Dynamic Programming, building on this earlier work, his later papers, and the research area of Adaptive and Intelligent Systems (AIS) at NSF, which he led from 1988 to 2015. This included neural networks, adaptive fuzzy logic, and a major paradigm shift in how to understand intelligence in the brain, which has passed tests on the best real-time brain data (Werbos and Davis 2016). Backpropagation and RLADP are the main foundations of the new deep learning revolution, which can be traced back to a research program he started at NSF, Cognitive Optimization and Prediction (COPN), and to an award he pushed there to Andrew Ng and Yann LeCun, whose success stories were conveyed to Sergey Brin at Google. AIS also led to more powerful and advanced developments and applications of RLADP, where massive new breakthroughs are still appearing, in areas ranging from electric power to the control of air and ground vehicles, and in new options for quantum technology for observing the sky, quantum RLADP and cybersecurity. These are reviewed in his new paper, “Quantum Technology to Expand Soft Computing,” Systems and Soft Computing (Elsevier), December 2022 (https://www.sciencedirect.com/science/article/pii/S2772941922000011).

At the request of Kumar Venayagamoorthy, IEEE/Wiley Series Editor for Power and Energy, he has probed deeper into how climate change actually does pose a threat to human species existence, and how new IEEE technology and market design could be combined to solve these problems at a much lower cost than any of the well-known existing approaches. These efforts have only just begun, but they have already demonstrated how IEEE technologies (which include market design, as in electric power) would allow vastly more impact than present climate policies, at lower cost and faster (build-a-world.org). He has also been active for decades in IEEE-USA, in the planning committee of the Millennium Project (www.millennium-project.org), and in the National Space Society. In 2009, as a legislative fellow handling climate and many other areas of science for Senator Specter, he learned the realities of many S&T challenges facing the world today.

Michael Bronstein

University of Oxford, UK

July 21st, 2022

Physics-inspired learning on graphs

Abstract

The message-passing paradigm has been the “battle horse” of deep learning on graphs for several years, making graph neural networks a big success in a wide range of applications, from particle physics to protein design. From a theoretical viewpoint, it established the link to the Weisfeiler-Lehman hierarchy, allowing the expressive power of GNNs to be analysed. I argue that the very “node-and-edge”-centric mindset of current graph deep learning schemes may hinder future progress in the field. As an alternative, I propose physics-inspired “continuous” learning models that open up a new trove of tools from the fields of differential geometry, algebraic topology, and differential equations, so far largely unexplored in graph ML.
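
As a deliberately simplified illustration of the “continuous” viewpoint, the sketch below propagates node features by discretizing the graph heat equation dX/dt = -LX, a physics-inspired alternative to a learned message-passing layer. The graph, step size, and number of steps are arbitrary choices for the example, not the models discussed in the talk.

```python
import numpy as np

def graph_heat_diffusion(adj, X, tau=0.1, steps=20):
    """Explicit Euler discretization of dX/dt = -L X on a graph.

    adj : (n, n) symmetric adjacency matrix
    X   : (n, d) node feature matrix
    Each step smooths features along edges, driven by the graph Laplacian L = D - A,
    i.e. a 'continuous' analogue of repeated neighbourhood aggregation."""
    deg = adj.sum(axis=1)
    L = np.diag(deg) - adj
    for _ in range(steps):
        X = X - tau * (L @ X)
    return X

# toy usage: a 4-node path graph with a single scalar feature per node
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X0 = np.array([[1.0], [0.0], [0.0], [0.0]])
print(graph_heat_diffusion(A, X0))
```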

Short Bio

Michael Bronstein is the DeepMind Professor of AI at the University of Oxford and Head of Graph Learning Research at Twitter. He was previously a professor at Imperial College London and held visiting appointments at Stanford, MIT, and Harvard, and has also been affiliated with three Institutes for Advanced Study (at TUM as a Rudolf Diesel Fellow (2017-2019), at Harvard as a Radcliffe fellow (2017-2018), and at Princeton as a short-time scholar (2020)). Michael received his PhD from the Technion in 2007. He is the recipient of the Royal Society Wolfson Research Merit Award, Royal Academy of Engineering Silver Medal, five ERC grants, two Google Faculty Research Awards, and two Amazon AWS ML Research Awards. He is a Member of the Academia Europaea, Fellow of IEEE, IAPR, BCS, and ELLIS, ACM Distinguished Speaker, and World Economic Forum Young Scientist. In addition to his academic career, Michael is a serial entrepreneur and founder of multiple startup companies, including Novafora, Invision (acquired by Intel in 2012), Videocites, and Fabula AI (acquired by Twitter in 2019).

Bing Xue

Victoria University of Wellington, New Zealand

July 22nd, 2022

Evolutionary Computation for Automated Design of Deep Neural Networks

Abstract

Deep neural networks (DNNs) have achieved great success in a wide range of challenging tasks in recent years, such as image classification and natural language processing. The deep architectures in DNNs play a crucial role in their performance, by learning meaningful features directly from the raw data without explicit feature engineering. However, many powerful DNN architectures are manually designed, which requires rich expertise and experience in both the DNN and the target problem domains, and these are often not available to interested users in practice. Neural architecture search (NAS) can address this issue by automatically designing DNN architectures, and evolutionary computation (EC) based methods (i.e. evolutionary NAS, ENAS) have recently gained much attention and success. In this talk, I will mainly take image classification as the application area to discuss how EC methods, such as genetic algorithms and particle swarm optimisation, can be used to achieve NAS, and the key components in designing an ENAS algorithm, such as individual representation, fitness function and search mechanism. State-of-the-art ENAS algorithms will be reviewed and discussed in terms of how they achieve improved performance with respect to classification accuracy, model complexity, computational efficiency, etc. Finally, the talk will also cover current challenging issues and open research questions in this area.
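
As a toy illustration of the ENAS ingredients named above (individual representation, fitness function, search mechanism), the sketch below encodes an architecture as a list of layer widths and evolves it with mutation and truncation selection. The evaluate_architecture function is a hypothetical stand-in: in a real ENAS system it would decode, train, and validate the network, which is far more expensive.

```python
import random

def evaluate_architecture(genome):
    """Placeholder fitness: a real ENAS fitness would train the decoded network
    and return validation accuracy, possibly traded off against model size."""
    return -abs(len(genome) - 4) - 0.001 * sum(genome)   # purely illustrative surrogate

def mutate(genome):
    """Variation operator on the variable-length list-of-widths representation."""
    g = list(genome)
    op = random.choice(["widen", "narrow", "add", "remove"])
    i = random.randrange(len(g))
    if op == "widen":
        g[i] = min(g[i] * 2, 512)
    elif op == "narrow":
        g[i] = max(g[i] // 2, 8)
    elif op == "add":
        g.insert(i, random.choice([16, 32, 64, 128]))
    elif op == "remove" and len(g) > 1:
        g.pop(i)
    return g

def evolve(pop_size=10, generations=20):
    population = [[random.choice([16, 32, 64]) for _ in range(random.randint(2, 6))]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate_architecture, reverse=True)
        parents = scored[: pop_size // 2]                 # truncation selection
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=evaluate_architecture)

print(evolve())
```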

Short Bio

Bing Xue is currently Professor of Artificial Intelligence and Deputy Head of School in the School of Engineering and Computer Science at Victoria University of Wellington. She has over 300 papers published in fully refereed international journals and conferences. Her research focuses mainly on evolutionary computation, machine learning, evolving deep neural networks, feature selection/construction, image classification, other related areas, and their real-world applications.

Prof. Xue has been involved in organising over 20 international conferences, for example as the conference chair of CEC in IEEE WCCI 2024, tutorial chair of IEEE WCCI 2022, workshop co-chair of IEEE ICDM 2021, and finance chair of IEEE CEC 2019. Prof. Xue is currently the Chair of the Evolutionary Computation Technical Committee of the IEEE Computational Intelligence Society (CIS), Chair of the IEEE CIS Task Force on Evolutionary Deep Learning and Applications, and Editor of the IEEE CIS Newsletter. She has also served as an associate editor of several international journals, such as the IEEE Transactions on Evolutionary Computation, IEEE Computational Intelligence Magazine, IEEE Transactions on Artificial Intelligence, and IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI).

Tinne Tuytelaars

KU Leuven, Belgium

July 23rd, 2022

Keep on learning without forgetting

Abstract

A core assumption behind most machine learning methods is that training data should be representative of the data seen at test time. While this seems almost trivial, it is, in fact, a particularly challenging condition to meet in real world applications of machine learning: the world evolves and distributions shift over time in an unpredictable way (think of changing weather conditions, fashion trends, social hypes, wear and tear, etc.). This means models get outdated and in practice need to be re-trained over and over again. A particular subfield of machine learning, known as continual learning, aims at addressing these issues. The goal is to develop learning schemes that can learn from non-i.i.d. distributed data. The challenges are to realise this without storing all the training data (ideally none at all), with fixed memory and model capacity, and without forgetting concepts learned previously. In this talk, I will give an overview of recent work in this direction, with a focus on learning deep models for computer vision.
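
One concrete example of such a learning scheme, given here only for orientation and not necessarily among the methods covered in the talk, is the regularization-based family (e.g. elastic-weight-consolidation-style penalties) that discourages changing parameters that were important for earlier tasks:

\[
\mathcal{L}(\theta) \;=\; \mathcal{L}_{\text{new}}(\theta) \;+\; \frac{\lambda}{2} \sum_{i} F_i \left(\theta_i - \theta_i^{*}\right)^2 ,
\]

where \(\theta^{*}\) are the parameters learned on previous tasks, \(F_i\) estimates the importance of parameter \(i\) (for instance via the diagonal of the Fisher information), and \(\lambda\) trades plasticity against forgetting, all without storing the old training data.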

Short Bio

Tinne Tuytelaars is a professor at KU Leuven, Belgium, working on computer vision and, in particular, topics related to image representations, vision and language, continual learning, dynamic architectures and more. She has been program chair for ECCV14 and CVPR21, and general chair for CVPR16. She also served as associate editor-in-chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence and is on the editorial board of the International Journal of Computer Vision. She was awarded an ERC Starting Grant in 2009 and an ERC Advanced Grant in 2021, and received the Koenderink test-of-time award at ECCV16.

FUZZ-IEEE 2022

Kazuo Tanaka

University of Electro-Communications, Tokyo, Japan

July 19th, 2022

Fuzzy Control Systems Design and Analysis: Past, Present and Future

Abstract

This talk will present advances in fuzzy control systems design and analysis that enabled a shift from model-free control to model-based control. After a brief review of the history of fuzzy control, the talk will start from the origin in 1985, when Takagi and Sugeno published their seminal work proposing a new type of control-oriented model representation. The first part of this talk will give an overview of the Takagi-Sugeno fuzzy model-based control approach. A feature of this approach is that any nonlinearities are completely captured by the Takagi-Sugeno fuzzy models and are fully embedded into the membership functions of those models. This feature yields a simple, natural and effective design procedure as an alternative or supplement to other nonlinear control techniques that require special and rather involved knowledge. This part will mainly review this feature of Takagi-Sugeno fuzzy model-based control. The second part of this talk will address system-theoretic approaches to controller design utilizing linear matrix inequalities (LMIs) and, more recently, sum of squares (SOS). The LMI design approach has enjoyed great success and popularity. However, there exist some drawbacks, e.g., difficulties in casting control problems in terms of LMIs and the conservative nature of some LMI design conditions, to name a few. To overcome the difficulties of the LMI design approach, a new approach based on SOS was proposed as a post-LMI design approach. This part will mainly outline innovative and significant advances in the SOS design approach. The final part of this talk will briefly introduce future applications to real flight control for two types of unmanned aerial vehicles (UAVs), considered to be some of the most challenging nonlinear control problems. Throughout the talk, it will be shown how the fuzzy model-based control approach has enriched the design and analysis of nonlinear control systems by bridging from the “nonlinear” world to the “fuzzy” world.
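
For reference, the textbook form of the Takagi-Sugeno model and of the associated parallel distributed compensation (PDC) controller mentioned above can be written as

\[
\dot{x}(t) \;=\; \sum_{i=1}^{r} h_i\bigl(z(t)\bigr)\bigl(A_i x(t) + B_i u(t)\bigr),
\qquad
u(t) \;=\; -\sum_{i=1}^{r} h_i\bigl(z(t)\bigr) F_i x(t),
\]

where the normalized membership functions satisfy \(h_i(z) \ge 0\) and \(\sum_i h_i(z) = 1\), and it is into these functions that the nonlinearities are embedded. Given the gains \(F_j\), one basic sufficient condition for quadratic stability of the closed loop is the existence of a common \(P \succ 0\) such that

\[
(A_i - B_i F_j)^{\top} P + P\,(A_i - B_i F_j) \prec 0
\quad \text{for all pairs } i, j \text{ whose rules can be simultaneously active},
\]

which is a linear matrix inequality in \(P\). This is the standard formulation given only for context, not the specific (and less conservative) LMI or SOS designs of the lecture.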

Short Bio

Professor Kazuo Tanaka is currently a Professor in the Department of Mechanical and Intelligent Systems Engineering at the University of Electro-Communications, Tokyo, Japan. He received his Ph.D. in Systems Science from the Tokyo Institute of Technology in 1990. He was a Visiting Scientist in Computer Science at the University of North Carolina at Chapel Hill in 1992 and 1993. He has received several prestigious awards from his professional societies, including the IFAC World Congress Best Poster Paper Prize in 1999, the IEEE Transactions on Fuzzy Systems Outstanding Paper Award in 2000, and the IEEE Computational Intelligence Society (CIS) Fuzzy Systems Pioneer Award in 2021, to name a few. He has been selected annually in recent years for the Stanford University list of the top 2% of scientists worldwide. His research interests include fuzzy systems control, nonlinear systems control, and their applications to unmanned aerial vehicles. According to Google Scholar, his journal publications currently report over 29,900 citations, with an h-index of 56 and an i10-index of 148. He is an IEEE Fellow and an IFSA Fellow.

Gang Feng

City University of Hong Kong

July 20th, 2022

Model Based Fuzzy Control and Universal Fuzzy Controllers

Abstract

This talk first gives a brief review of fuzzy control and, particularly, model based fuzzy control. It then discusses universal fuzzy controller problems for continuous-time multi-input-multi-output general nonlinear systems based on a class of generalized fuzzy dynamic models, often called Takagi-Sugeno fuzzy models. It is shown that this class of generalized fuzzy dynamic models can be used to approximate general nonlinear systems. By using their approximation capability, several results on universal fuzzy controllers for general nonlinear systems are then provided. Finally, some challenges in model based fuzzy control are also revealed.

Short Bio

Gang Feng received the B.Eng. and M.Eng. degrees in Automatic Control from Nanjing Aeronautical Institute, China, in 1982 and 1984 respectively, and the Ph.D. degree in Electrical Engineering from the University of Melbourne, Australia, in 1992. Professor Feng was a Lecturer at the Royal Melbourne Institute of Technology in 1991 and a Senior Lecturer/Lecturer at the University of New South Wales from 1992 to 1999. He has been with the City University of Hong Kong since 2000, where he is now a Chair Professor of Mechatronic Engineering. He has received the ChangJiang Chair Professorship award conferred by the Ministry of Education of China, an Alexander von Humboldt Fellowship, the IEEE Computational Intelligence Society Fuzzy Systems Pioneer Award, the IEEE Transactions on Fuzzy Systems Outstanding Paper Award, the Outstanding Research Award and the President's Award of CityU, and several best conference paper awards. He has been listed as a SCI highly cited researcher by Clarivate Analytics from 2016 to 2021. His research interests include intelligent systems and control, networked control systems, and multi-agent systems and control. Professor Feng is a Fellow of the IEEE. He has been an Associate Editor of the IEEE Transactions on Automatic Control, IEEE Transactions on Fuzzy Systems, IEEE Transactions on Systems, Man, & Cybernetics, Mechatronics, the Journal of Systems Science and Complexity, and the Journal of Control Theory and Applications. He is also on the advisory board of Unmanned Systems.

Gabriella Pasi

University of Milano-Bicocca

July 21st, 2022

Efficient Languages for Knowledge Representation and Approximate Reasoning

Abstract

Representing knowledge and reasoning with it has been one of the core aspects of Artificial Intelligence since its foundation. Since the first approaches defined to cope with this issue, the languages defined for the purpose of formally representing knowledge have evolved over time, also driven by the birth and goals of the Semantic Web, with its ontology-related languages, and, more recently, by knowledge graphs. In this context, this talk addresses the issue of defining efficient languages for representing knowledge and performing approximate reasoning with it. After a synthetic overview of the main categories of languages, the OSF formalism and an efficient way to implement it will be introduced, as well as a proposal to cope with fuzziness in knowledge management in this context.

Short Bio

Gabriella Pasi is a Professor at the Department of Computer Science, Systems and Communication (DISCo) of the University of Milano-Bicocca, where she leads the Information and Knowledge Representation, Retrieval, and Reasoning (IKR3) research lab. Her main research interests include Natural Language Processing (particularly in relation to the tasks of Information Retrieval and Information Filtering), Knowledge Representation and Reasoning, User Modelling, and Social Media Analytics. She is Associate Editor of several international journals and has participated in the organization of several international events, both in the role of general and program chair. She has been both coordinator and PI of several international research projects. She has published more than 250 papers in international journals and books and in the proceedings of international conferences. She is a Fellow of the International Fuzzy Systems Association (IFSA) and co-director of the ELLIS Unit in Milan (European Laboratory for Learning and Intelligent Systems).

Lawrence O. Hall

University of South Florida

July 22nd, 2022

Medical Image Data Analysis Can Use a Fuzzy Boost

Abstract

Many types of medical images have imprecision in them. For example, medical image acquisition parameters may vary, and the description of the images may be incomplete and/or imprecise. The labels used by clinicians may only approximately match the diagnostic label. Data collection may be done by class without looking at the images, which can introduce bias into the data used for training. As an example, a large published Covid-19 X-ray dataset has non-Covid subjects mostly standing and Covid-19 patients mostly imaged lying down by a portable X-ray machine. A learned model got 99% accuracy; however, what did it learn? Can we explain it crisply, or using fuzzy terms? Labeling medical data is expensive because experts need to be involved. Fuzzy clustering and labeling of groups can speed the process and save money. However, the clusters need to have real-world meaning from a reasonable set of features, and clusters may not be homogeneous, which means the images in them truly belong only fuzzily. If you can explain a deep learned model, will the explanation be fuzzy, pseudo-probabilistic, or something else? This talk will explore the use and potential use of fuzzy approaches in medical image analysis, which today is being driven by different types of neural networks that have feature extraction capabilities.
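
For readers unfamiliar with fuzzy clustering, the sketch below is a minimal fuzzy c-means implementation of the kind that could be used to group unlabeled images (represented here by generic feature vectors) before asking experts to label whole groups. The fuzzifier m, the number of clusters, and the toy data are illustrative assumptions, not the methods of the talk.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, eps=1e-9, seed=0):
    """Standard fuzzy c-means on feature vectors X of shape (n, d).

    Returns (centers, U) where U[i, k] is the fuzzy membership of sample k
    in cluster i; memberships sum to 1 over clusters but are not crisp,
    so each image can partially belong to several groups."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)            # weighted cluster centers
        d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2) + eps
        inv = d2 ** (-1.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=0, keepdims=True)                  # membership update
        if np.abs(U_new - U).max() < 1e-5:
            return centers, U_new
        U = U_new
    return centers, U

# toy usage: cluster 2-D "image features" into 2 fuzzy groups
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(centers)
```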

Short Bio

Lawrence O. Hall is a Distinguished University Professor in the Department of Computer Science and Engineering at the University of South Florida and the co-Director of the Institute for Artificial Intelligence + X. He is the 2021-2022 IEEE Vice President for Publications, Products and Services. He received his Ph.D. in Computer Science from Florida State University in 1986 and a B.S. in Applied Mathematics from the Florida Institute of Technology in 1980. He is a fellow of the IEEE, AAAS, AIMBE, and IAPR. He received the Norbert Wiener Award in 2012 and the Joseph Wohl Award in 2017 from the IEEE SMC Society, and the 2021 Fuzzy Systems Pioneer Award from the IEEE Computational Intelligence Society. He is a past President of the IEEE Systems, Man and Cybernetics Society and a former Editor-in-Chief of what is now the IEEE Transactions on Cybernetics. He is on the editorial boards of the Proceedings of the IEEE and IEEE Spectrum. His research interests lie in learning from big data, distributed machine learning, medical image understanding, bioinformatics, pattern recognition, modeling imprecision in decision making, and integrating AI into image processing. He continues to explore unsupervised and semi-supervised learning using scalable fuzzy approaches. He has authored or co-authored over 100 journal publications, as well as many conference papers and book chapters. He has received over $6M in research funding from agencies such as the National Science Foundation, National Institutes of Health, Department of Energy, DARPA, and NASA.

Laszlo T. Koczy

Szechenyi Istvan University and Budapest University of Technology and Economics, Hungary

July 23rd, 2022

Discrete Bacterial Memetic Evolutionary Algorithms for solving high complexity problems

Abstract

Evolutionary algorithms attempt to copy the solutions nature offers for solving (in the quasi-optimal sense) intractable problems whose exact mathematical solution is impossible. The prototype of such algorithms is the Genetic Algorithm, which is, however, rather slow and often does not find a sufficient solution. Nawa and Furuhashi proposed a more efficient modification under the name of Bacterial Evolutionary Algorithm (BEA). Moscato proposed the combination of evolutionary global search with nested local search based on traditional optimization techniques, and called the new approach the memetic algorithm (MA). Our group started to combine BEA with Levenberg-Marquardt local search and obtained very good results on a series of benchmarks. The next step was to apply the new type of MA to NP-hard discrete optimization, starting with the classic and well-known Traveling Salesman Problem (TSP), applying discrete local search, and thus proposing the novel Discrete Bacterial Memetic Evolutionary Algorithm (DBMEA). We then continued with a series of related, but mathematically different, graph search problems, applying the same approach. Although we could not improve on the tailor-made Helsgaun-Lin-Kernighan (HLK) heuristics for the basic TSP, we obtained comparably good results, and in some other problem cases we obtained new, so far the best, combinations of accuracy and running time. The Traveling Repairman Problem is an eminent example where DBMEA delivers the best solutions. The advantages of the new approach are as follows:

– General applicability. With minimal adaptation to the concrete problem type, the same method could be successfully applied; there was no need to construct new tailor-made algorithms for every new problem.

– Predictability. Knowing the problem size, it was easy to give a good estimation of the running time, assuming a certain accuracy. This is not true for any of the other approaches, including HLK, and it is especially not true for other methods finding approximate solutions (often with large error).

In the talk, several examples will be presented with standard benchmarks going up to large numbers of graph nodes, and the DBMEA results will be compared with the best practices from the literature. The predictability feature will also be illustrated by size versus running-time graphs. Reference will be made to the importance of determining the initial population in achieving fast and accurate results, and a new approach, the Bounded Radius Heuristics, will be presented. In the last part of the talk, a series of fuzzy extensions of the Time Dependent TSP (TD TSP) will be introduced, an extension of the TSP with real-life aspects, where the natural fluctuation of the traffic in certain areas causes non-deterministic features and additional difficulties in the quasi-optimization. These novel extensions will also be tackled successfully with the DBMEA approach. As a conclusion, one more example will be mentioned where the discrete NP-hard problem is of a rather different nature, and it will be shown that by changing the local search technique appropriately, DBMEA can still deliver superior results.
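
To make the memetic idea concrete, the sketch below is a generic memetic skeleton for the TSP that nests a 2-opt local search inside a population-level evolutionary loop. It is a didactic illustration of combining global evolutionary search with local search, not DBMEA itself: the bacterial mutation, gene transfer, and bounded-radius initialization described in the talk are deliberately omitted, and the toy instance is arbitrary.

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Nested local search: keep reversing segments while the tour improves."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

def memetic_tsp(dist, pop_size=8, generations=30):
    n = len(dist)
    pop = [two_opt(random.sample(range(n), n), dist) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        parents = pop[: pop_size // 2]                    # keep the best half
        children = []
        for p in parents:
            child = p[:]                                  # mutate: reverse a random segment
            i, j = sorted(random.sample(range(n), 2))
            child[i:j] = reversed(child[i:j])
            children.append(two_opt(child, dist))         # local search after every variation
        pop = parents + children
    return min(pop, key=lambda t: tour_length(t, dist))

# toy usage: 6 cities on a line, distance = difference of coordinates
pts = [(x, 0) for x in range(6)]
dist = [[abs(a[0] - b[0]) for b in pts] for a in pts]
print(memetic_tsp(dist))
```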

Short Bio

Laszlo T. Koczy received the Ph.D. degree from the Technical University of Budapest in 1977, and the D.Sc. (a postdoctoral degree) from the Hungarian Academy of Sciences in 1998. He spent his career at BME until 2001, and from 2002 at Szechenyi Istvan University (Gyor, SZE), where he was Dean of Engineering and has been, from 2013 to the present, President of the University Research Council and of the University Ph.D. Council. He is a member of the National Doctoral Council and a member, by appointment of the Prime Minister, of the Hungarian Accreditation Committee (for Higher Education), where he chairs the Engineering Sciences Board and the Council for Professors. He has been a visiting professor in Australia (including UNSW), Japan, Korea, Austria, and Italy. He was Vice President and then President of IFSA, and has been a Council Member ever since. He was an IEEE CIS AdCom member for two cycles and a CIS representative on the Neural Networks Council AdCom for another two terms. He was the founder of the Hungarian Fuzzy Association and is now the Life Honorary President of HFA. His main research activities have been in the field of Computational Intelligence, especially fuzzy systems, evolutionary and memetic algorithms, and neural networks, as well as applications in engineering, logistics, management, etc. He has published over 750 research articles with over 3000 fully independent and over 7200 Google Scholar citations; his h-index is 40. His main results include the concept of rule interpolation in sparse fuzzy models and hierarchical interpolative fuzzy systems; the fuzzy Hough transform; fuzzy signatures, fuzzy situational maps, and fuzzy signature state machines; the node reduction algorithm in Fuzzy Cognitive Maps; and the Bacterial Memetic Evolutionary Algorithm and the Discrete Bacterial Memetic Evolutionary Algorithm (for NP-hard continuous and discrete optimization and search), among others. His research interests include applications of CI to telecommunication, transportation and logistics, vehicles and mobile robots, control, built environment evaluation and maintenance problems, information retrieval, employee attitude evaluation, management system investigation, etc.

IEEE CEC 2022

Yaochu Jin

Bielefeld University, Germany

July 19th, 2022

Privacy-Preserving Federated Bayesian Evolutionary Optimization

Abstract

Privacy preservation is a key concern in distributed machine learning and collective decision-making. This talk begins with a quick introduction to Bayesian optimization and Bayesian evolutionary optimization. Then, we briefly discuss existing ideas for privacy-preserving Gaussian process modelling and Bayesian optimization, followed by a presentation of two recently proposed federated evolutionary Bayesian optimization algorithms for single- and multi-objective optimization, in which a global acquisition function is estimated and optimized without requiring local clients to transmit their data. Empirical results show that the federated Bayesian optimization algorithms perform comparably to centralized Bayesian optimization, while being able to preserve data privacy.
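
For intuition only, here is a much-simplified sketch of the federated idea: each client fits a Gaussian process surrogate on its private data and reports only acquisition values at shared candidate points, which the server averages into a global acquisition. The averaging rule, the candidate grid, and the UCB acquisition are illustrative assumptions, not the algorithms proposed in the talk (which also estimate the global acquisition differently and optimize it with an evolutionary algorithm).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def client_acquisition(X_local, y_local, candidates, beta=2.0):
    """Fit a GP on local data only and return a UCB score per candidate;
    the raw (X_local, y_local) never leaves the client."""
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_local, y_local)
    mu, sigma = gp.predict(candidates, return_std=True)
    return mu + beta * sigma

def federated_suggest(clients, candidates):
    """Server side: average the clients' acquisition values and pick the best
    candidate, without ever seeing the clients' data."""
    scores = np.mean([client_acquisition(X, y, candidates) for X, y in clients], axis=0)
    return candidates[np.argmax(scores)]

# toy usage: two clients observing the same hidden 1-D function at private inputs
f = lambda x: -(x - 0.3) ** 2
rng = np.random.default_rng(0)
clients = [(X, f(X).ravel()) for X in (rng.random((5, 1)), rng.random((5, 1)))]
candidates = np.linspace(0, 1, 101).reshape(-1, 1)
print(federated_suggest(clients, candidates))
```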

Short Bio

Yaochu Jin is an Alexander von Humboldt Professor for Artificial Intelligence endowed by the German Federal Ministry of Education and Research, with the Faculty of Technology, Bielefeld University, Germany. He is also a Surrey Distinguished Chair Professor in Computational Intelligence with the Department of Computer Science, University of Surrey, Guildford, U.K. He was a “Finland Distinguished Professor” at the University of Jyväskylä, Finland, a “Changjiang Distinguished Visiting Professor” at Northeastern University, China, and a “Distinguished Visiting Scholar” at the University of Technology Sydney, Australia. His main research interests include evolutionary optimization and learning, trustworthy machine learning and optimization, and evolutionary developmental AI. Prof. Jin is presently the Editor-in-Chief of Complex & Intelligent Systems. He was the Editor-in-Chief of the IEEE Transactions on Cognitive and Developmental Systems, an IEEE Distinguished Lecturer in 2013-2015 and 2017-2019, and the Vice President for Technical Activities of the IEEE Computational Intelligence Society (2015-2016). He is the recipient of the 2018 and 2021 IEEE Transactions on Evolutionary Computation Outstanding Paper Awards, and the 2015, 2017, and 2020 IEEE Computational Intelligence Magazine Outstanding Paper Awards. He was named a Highly Cited Researcher by the Web of Science consecutively from 2019 to 2021. He is a Member of Academia Europaea and a Fellow of IEEE.

Jose Lozano

Basque Center for Applied Mathematics (BCAM), Spain

July 20th, 2022

A Bayesian approach as an alternative to null hypothesis statistical testing for analyzing optimization experiments

Abstract

In this talk we initially analyze null hypothesis statistical testing, the use of p-values, and the controversy around them. Starting from their weaknesses and misuses, we provide an alternative based on a Bayesian approach. In particular, we propose a Bayesian method to deal with ranking data generated from optimization experiments. The proposed method provides much richer information than that produced by statistical testing, such as quantifying the uncertainty in the comparison. This allows much more informed decisions to be taken and deeper analyses to be carried out. While we illustrate the methodology by means of data coming from the results of optimization algorithms, data from classifiers or other machine learning algorithms can be analyzed in the same way.
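
As a deliberately simplified, two-algorithm illustration of the Bayesian flavour (not the full ranking model of the talk), one can place a Beta prior on the probability that algorithm A beats algorithm B on a random instance and report the whole posterior instead of a p-value; the prior and the data below are made up for the example.

```python
import numpy as np
from scipy import stats

# per-instance results: 1 if algorithm A obtained the better objective value, else 0
wins = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 0])

# Beta(1, 1) prior on theta = P(A beats B); the posterior is Beta(1 + wins, 1 + losses)
posterior = stats.beta(1 + wins.sum(), 1 + (len(wins) - wins.sum()))

print("posterior mean of theta:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
print("P(theta > 0.5):", 1 - posterior.cdf(0.5))   # uncertainty quantified directly
```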

Short Bio

Jose A. Lozano received his M.Sc. degree in mathematics and his Ph.D. in computer science from the University of the Basque Country UPV/EHU, Spain, in 1992 and 1998 respectively. He has been a full professor at the University of the Basque Country since 2008, where he leads the Intelligent Systems Group. Since January 2019 he has been the scientific director of the Basque Center for Applied Mathematics (BCAM), and he is a Fellow of the IEEE. Prof. Lozano has authored more than 150 ISI journal papers, some of which have become highly cited papers. He has supervised 24 PhD theses, many of which have received awards in national competitions. He has also received (with his students) several best paper awards at different international conferences. His current research interests include combinatorial optimization, machine learning, and their synergies. In combinatorial optimization in particular, he is interested in the development of theories and algorithms that allow improved optimization. In the machine learning area his interests include weakly supervised classification, time series analysis, and Bayesian inference, among others. Prof. Lozano has served on the organizing and program committees of over 70 international conferences, being the general chair of the IEEE Congress on Evolutionary Computation (IEEE CEC 2017) and of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD 2021), and the editor-in-chief of the Genetic and Evolutionary Computation Conference (GECCO 2020). He also serves (or has served) as Associate Editor of top journals in the field, such as the IEEE Transactions on Evolutionary Computation, the Evolutionary Computation Journal, and the IEEE Transactions on Neural Networks and Learning Systems, to name but a few.

Darrell Whitley

Colorado State University

July 21st, 2022

Recombination, Lattices of Optima and Big Valleys

Abstract

A popular hypothesis suggests that the local optima of many combinatorial optimization problems are organized into “big valleys” and “funnels.” New proofs show that many local optima and basins of attraction are connected by overlapping hypercube lattice structures under genetic recombination. A single lattice can contain exponentially many local optima. A deterministic form of Partition Crossover can return both the best and worst solutions which lie at opposite ends of a lattice in O(n) time. Given two parents that are local optima, all of the solutions in a lattice must be local optima in the largest hyperplane subspace containing the two parents. These results hold for all k-bounded pseudo-Boolean functions, as well as for classic problems such as the Traveling Salesman Problem. For all k-bounded pseudo-Boolean functions we know how to compute the location of improving moves on average in O(1) time, making random mutation obsolete. Thus, we have very fast methods for locating a sample of local optima, and then deterministically mapping the location of other local optima in the search space. Empirical evidence suggests these lattices connect to the global optimum.
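
To make the recombination idea concrete, here is a small sketch of a deterministic partition crossover for a k-bounded pseudo-Boolean function given explicitly as a sum of subfunctions over small variable subsets: the variables where the two parents differ are grouped into connected components of the variable interaction graph, and each component is decided independently by keeping the better parent's bits. The (variables, f) representation and the toy example are illustrative assumptions, not the implementation discussed in the talk.

```python
import itertools
from collections import defaultdict

def partition_crossover(p1, p2, subfunctions, maximize=True):
    """Greedy, deterministic partition crossover for a k-bounded function.

    p1, p2       : parent solutions as lists of bits
    subfunctions : list of (variables, f) pairs, where f maps the tuple of bit
                   values of `variables` to a real number; the objective is the
                   sum of all subfunction values."""
    diffset = {i for i in range(len(p1)) if p1[i] != p2[i]}
    # interaction graph restricted to the variables where the parents differ
    adj = defaultdict(set)
    for variables, _ in subfunctions:
        d = [v for v in variables if v in diffset]
        for a, b in itertools.combinations(d, 2):
            adj[a].add(b)
            adj[b].add(a)
    # connected components = independently decidable partitions
    unseen, components = set(diffset), []
    while unseen:
        stack, comp = [unseen.pop()], set()
        while stack:
            v = stack.pop()
            comp.add(v)
            for w in adj[v]:
                if w in unseen:
                    unseen.remove(w)
                    stack.append(w)
        components.append(comp)
    # take the better parent's assignment on each component
    child = list(p1)
    for comp in components:
        touched = [(vs, f) for vs, f in subfunctions if comp & set(vs)]
        partial = lambda parent: sum(f(tuple(parent[v] for v in vs)) for vs, f in touched)
        pick_p1 = (partial(p1) >= partial(p2)) if maximize else (partial(p1) <= partial(p2))
        better = p1 if pick_p1 else p2
        for v in comp:
            child[v] = better[v]
    return child

# toy usage: f(x) = g(x0, x1) + g(x2, x3), where g rewards a block of two ones;
# each parent solves one block, and the crossover combines the best of both.
g = lambda bits: bits[0] * bits[1]
subfs = [((0, 1), g), ((2, 3), g)]
print(partition_crossover([1, 1, 0, 0], [0, 0, 1, 1], subfs))   # -> [1, 1, 1, 1]
```

Because the components do not interact with one another (they can only share variables on which the parents agree), choosing the better assignment per component yields the best of the exponentially many offspring in the lattice spanned by the two parents, in time linear in the size of the function.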

Short Bio

Prof. Darrell Whitley is an ACM Fellow who has been active in Evolutionary Computation since 1986. He introduced the first “steady state genetic algorithm” with rank-based selection, the GENITOR algorithm. He has worked on dozens of real-world applications of evolutionary algorithms, including satellite scheduling; this technology is still being used today for satellite resource scheduling. He introduced some of the first applications of evolutionary algorithms to reinforcement learning problems. He has made theoretical contributions to the field, including the Sharpened and Focused No Free Lunch theorems. He has also shown it is possible to compute the location of improving moves in constant time for various NP-Hard combinatorial problems, thus making most forms of random mutation unnecessary. He served as Editor-in-Chief of the journal Evolutionary Computation, and served as Chair of the Governing Board of ACM SIGEVO from 2007 to 2011. He is a professor of Computer Science at Colorado State University, and served as chair from 2003 to 2018.

Emma Hart

Edinburgh Napier University

July 22nd, 2022

An Evolutionary Approach to the Autonomous Design and Fabrication of Robots in Unknown Environments

Abstract

Robot design is traditionally the domain of humans – engineers, physicists, and increasingly AI experts. However, if the robot is intended to operate in a completely unknown environment (for example, cleaning up inside a nuclear reactor), then it is very difficult for human designers to predict what kind of robot might be required. Evolutionary computing is a well-known technology that has been applied in various aspects of robotics for many years, for example to design controllers or body-plans. When coupled with advances in materials and printing technologies that allow rapid prototyping in hardware, it offers a potential solution to the issue raised above, for example enabling colonies of robots to evolve and adapt over long periods of time while situated in the environment they have to work in. However, it also brings new challenges, both from an algorithmic and from an engineering perspective.


The additional constraints introduced by the need, for example, to manufacture robots autonomously, to explore rich morphological search spaces, and to develop novel forms of control require some re-thinking of “standard” approaches in evolutionary computing, particularly regarding the interaction between evolution and individual learning. I will discuss some of these challenges and propose and showcase some methods to address them that have been developed during the ARE project. Finally, I will touch on some ethical issues associated with the notion of autonomous robot design, and discuss the potential of artificial evolution to be used as a tool to gain new insights into biological evolving systems.

Short Bio

Professor Emma Hart has worked in the field of Evolutionary Computing for over 20 years on applications ranging from combinatorial optimisation to robotics, where the latter includes robot design and swarm robotics. Her current work is mainly centred in Evolutionary Robotics, bringing together ideas on using artificial evolution as a tool for optimisation with research that focuses on how robots can be made to continually learn, improving performance as they gather information from their own or other robots’ experiences. The work has attracted significant media attention, including recently in the New Scientist and the Guardian. She gave a TED talk on this subject at TEDWomen in December 2021 in Palm Springs, USA, which has attracted over 1 million views since being released online in April 2022. She is the Editor-in-Chief of the journal Evolutionary Computation (MIT Press) and an elected member of the ACM SIG on Evolutionary Computation. In 2022, she was honoured to be elected as a Fellow of the Royal Society of Edinburgh for her contributions to the field of Computational Intelligence.

Gabriela Ochoa

University of Stirling, Scotland, UK

July 23rd, 2022

Optimization Trajectories and Landscapes: A Complex Networks View

Abstract

We present our recent findings and visual maps (static, animated, 2D and 3D) characterising computational search spaces. Many natural and technological systems are composed of a collection of interconnected units; examples are neural networks, social networks and the Internet. A key approach to capture the global properties of such systems is to model them as graphs whose nodes represent the units, and whose links stand for the interactions between them. This simple, yet powerful concept has been used to study a variety of complex systems where the goal is to analyse the connectivity pattern in order to understand the behaviour of the system. This talk overviews local optima networks (LONs), a network-based model of fitness landscapes where nodes are local optima and edges are possible search transitions among these optima. We also introduce search trajectory networks (STNs) as a tool to analyse and visualise the behaviour of metaheuristics. STNs model the search trajectories of algorithms. Unlike LONs, nodes are not restricted to local optima but instead represent representative states of the search process. Edges represent search progression between consecutive states. This extends the power and applicability of network-based models. Both LONs and STNs allow us to visualise realistic search spaces in ways not previously possible and bring a whole new set of quantitative network metrics for characterising and understanding computational search.
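
As a toy illustration of the LON model only (not the tools presented in the talk), the sketch below samples local optima of a small binary landscape with bit-flip hill climbing and records escape edges obtained by perturbing an optimum and climbing again; the landscape, the perturbation strength, the sampling budget, and the use of networkx are arbitrary choices for the example.

```python
import random
import networkx as nx

N = 8
BLOCKS = [(i, i + 1) for i in range(0, N, 2)]

def f(x):
    # toy landscape: each equal pair of bits scores 2, with a bonus if all bits are one
    return sum(2 for a, b in BLOCKS if x[a] == x[b]) + (2 if all(x) else 0)

def hill_climb(x):
    """Best-improvement bit-flip hill climbing to a local optimum."""
    x = list(x)
    while True:
        neighbours = [x[:i] + [1 - x[i]] + x[i + 1:] for i in range(N)]
        best = max(neighbours, key=f)
        if f(best) <= f(x):
            return tuple(x)
        x = best

def perturb(x, strength=2):
    x = list(x)
    for i in random.sample(range(N), strength):
        x[i] = 1 - x[i]
    return x

# sample escape edges: local optimum -> perturbation -> hill climbing -> local optimum
lon = nx.DiGraph()
for _ in range(500):
    o1 = hill_climb([random.randint(0, 1) for _ in range(N)])
    o2 = hill_climb(perturb(o1))
    lon.add_node(o1, fitness=f(o1))
    lon.add_node(o2, fitness=f(o2))
    if o1 != o2:
        w = lon.get_edge_data(o1, o2, {"weight": 0})["weight"]
        lon.add_edge(o1, o2, weight=w + 1)

print(lon.number_of_nodes(), "local optima,", lon.number_of_edges(), "escape edges")
```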

Short Bio

Gabriela Ochoa is a Professor of Computing Science at the University of Stirling in Scotland, UK. Her research lies in the foundations and applications of evolutionary algorithms and metaheuristics, with emphasis on autonomous search, fitness landscape analysis and visualisation. She holds a PhD from the University of Sussex, UK, and has held academic and research positions at the University Simon Bolivar, Venezuela, and the University of Nottingham, UK. Her recent work on network-based models of computational search enhances their descriptive and visualisation power, and has produced a number of publications that have received 4 best-paper awards and 8 other nominations at leading venues. She collaborates across disciplines to apply evolutionary computation in healthcare and conservation. She has been active in organisational and editorial roles within leading Evolutionary Computation outlets, including the IEEE Congress on Evolutionary Computation (CEC) and the IEEE Transactions on Evolutionary Computation. In 2020, she was recognised by EvoStar, the leading European event on bio-inspired algorithms, for her outstanding contributions to the field.

IJCNN 2022

Hava Siegelmann

University of Massachusetts

July 19th, 2022

Super Turing Computing Enables Lifelong Learning AI

Abstract

AI embedded in real systems, such as in satellites, robots and other autonomous devices, must make fast, safe decisions even when the environment changes, or under limitations on the available power; to do so, such systems must be adaptive in real time. To date, edge computing has no real adaptivity: the AI must be trained in advance, typically on a large dataset and with much computational power, and once fielded, the AI is frozen. It is unable to use its ongoing experience to improve its expertise; worse, since datasets cannot cover all possible real-world situations, systems with such frozen intelligent control are likely to fail.

A main reason AI is frozen once it is fielded is that it is built on the Turing machine paradigm, where a fixed program is loaded to the universal machine, which then follows the program’s instructions. But another theory of computation, Super-Turing computation, enables a more advanced type of AI, one whose expertise is not dependent solely on its training set, and where computing and learning advance hand in hand.

Lifelong Learning is the cutting edge of artificial intelligence – encompassing computational methods that allow systems to learn at runtime and incorporate that learning for application in new, unanticipated situations. The presentation will describe the state of the art and suggest directions to further advance the field.

Short Bio

Dr. Siegelmann is an internationally known professor of Computer Science and a recognized expert in neural networks. She is a core member of the Neuroscience and Behavior Program, director of the Biologically Inspired Neural and Dynamical Systems (BINDS) Laboratory, and a Provost Professor at the University of Massachusetts. She is particularly known for her groundbreaking work in computing beyond the Turing limit, and for achieving advanced learning capabilities through a new type of Artificial Intelligence: Lifelong Learning. In addition to research in this field, Dr. Siegelmann was the founding program manager of the Lifelong Learning Machines program at DARPA. Siegelmann conducts highly interdisciplinary research in next-generation machine learning, neural networks, intelligent machine-human collaboration, and computational studies of the brain, with applications to AI, data science, and the high-tech industry. She is a leader in increasing awareness of ethical AI and in supporting minorities and women in AI and STEM fields all over the world. Siegelmann has been a visiting professor at MIT, Harvard University, the Weizmann Institute, ETH, the Salk Institute, the Mathematical Sciences Research Institute in Berkeley, and the Newton Institute at Cambridge University. Her list of awards includes the Obama Presidential BRAIN Initiative award and the Donald O. Hebb Award of the International Neural Network Society (INNS) for “contribution to biological learning”; she was named a Distinguished Lecturer of the IEEE Computational Intelligence Society and was given DARPA’s Meritorious Public Service award. Siegelmann is an IEEE Fellow and an INNS Fellow.

Gitta Kutyniok

Mathematisches Institut der Universität München

July 20th, 2022

Reliable AI: Successes, Challenges, and Limitations

Abstract

Artificial intelligence is currently leading to one breakthrough after the other, both in public life with, for instance, autonomous driving and speech recognition, and in the sciences in areas such as medical diagnostics or molecular dynamics. However, one current major drawback is the lack of reliability of such methodologies. In this lecture we will take a mathematical viewpoint towards this problem, showing the power of such approaches to reliability. We will first provide an introduction into this vibrant research area, focussing specifically on deep neural networks. We will then survey recent advances, in particular, concerning generalization guarantees and explainability, touching also upon the setting of graph neural networks. Finally, we will discuss fundamental limitations of deep neural networks and related approaches in terms of computability, which seriously affects their reliability.

Short Bio

Gitta Kutyniok (https://www.ai.math.lmu.de/kutyniok) currently has a Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence at the Ludwig-Maximilians-Universität München. She received her Diploma in Mathematics and Computer Science as well as her Ph.D. degree from the Universität Paderborn in Germany, and her Habilitation in Mathematics in 2006 at the Justus-Liebig Universität Gießen. From 2001 to 2008 she held visiting positions at several US institutions, including Princeton University, Stanford University, Yale University, Georgia Institute of Technology, and Washington University in St. Louis, and was a Nachdiplom Lecturer at ETH Zurich in 2014. In 2008, she became a full professor of mathematics at the Universität Osnabrück, and moved to Berlin three years later, where she held an Einstein Chair in the Institute of Mathematics at the Technische Universität Berlin and a courtesy appointment in the Department of Computer Science and Engineering until 2020. In addition, Gitta Kutyniok has held an Adjunct Professorship in Machine Learning at the University of Tromso since 2019.

Gitta Kutyniok has received various awards for her research, such as an award from the Universität Paderborn in 2003, the Research Prize of the Justus-Liebig Universität Gießen and a Heisenberg Fellowship in 2006, and the von Kaven Prize by the DFG in 2007. She was invited as the Noether Lecturer at the ÖMG-DMV Congress in 2013, the Hans Schneider ILAS Lecturer at IWOTA in 2016, a plenary lecturer at the 8th European Congress of Mathematics (8ECM) in 2021, and the lecturer of the London Mathematical Society (LMS) Invited Lecture Series in 2022. She was also honored by invited lectures at both the International Congress of Mathematicians 2022 (ICM 2022) and the International Congress on Industrial and Applied Mathematics (ICIAM 2023). Moreover, she became a member of the Berlin-Brandenburg Academy of Sciences and Humanities in 2017, a SIAM Fellow in 2019, and a Simons Fellow at the Isaac Newton Institute in 2021; she received an Einstein Chair at TU Berlin in 2008, a Francqui Chair of the Belgian Francqui Foundation in 2020, and has held the first Bavarian AI Chair at LMU since 2020. She was Chair of the SIAM Activity Group on Imaging Sciences from 2018 to 2019 and Vice Chair of the new SIAM Activity Group on Data Science in 2021, and currently serves as Vice President-at-Large of SIAM. She is also the main coordinator of the Research Focus “Next Generation AI” at the Center for Advanced Studies at LMU from 2021 to 2023, serves as LMU-Director of the ONE MUNICH Strategy Forum Project on “Next generation Human-Centered Robotics: Human embodiment and system agency in trustworthy AI for the Future of Health”, and acts as current Co-Director of the Konrad Zuse School of Excellence in Reliable AI (relAI) in Munich.

Gitta Kutyniok’s research work covers, in particular, the areas of applied and computational harmonic analysis, artificial intelligence, compressed sensing, deep learning, imaging sciences, inverse problems, and applications to life sciences, robotics, and telecommunication.

Anna Monreale

University of Pisa

July 21st, 2022

The relationship between Explainability & Privacy in AI

Abstract

In recent years we have been witnessing the diffusion of AI systems based on powerful machine learning models which find application in many critical contexts such as medicine, financial markets, credit scoring, etc. In such contexts it is particularly important to design Trustworthy AI systems that guarantee the interpretability of their decisional reasoning as well as privacy protection and awareness. In this talk we will explore the possible relationships between these two relevant ethical values to be taken into consideration in Trustworthy AI. We will answer research questions such as: how can explainability help privacy awareness? Can explanations jeopardize individual privacy protection?

Short Bio

Anna Monreale is an Associate Professor at the Computer Science Department of the University of Pisa and a member of the KDD LAB. She was a visiting student at the Department of Computer Science of the Stevens Institute of Technology (Hoboken, New Jersey, USA) in 2010. Her research interests include big data analytics, social network analysis, and the study of privacy and ethical issues arising in learning AI models from these kinds of social and human sensitive data. In particular, she is interested in the evaluation of privacy risks during analytical processes, in the definition of privacy-by-design technologies in the era of big data, and in the definition of methods for explaining black box decision systems. She earned her Ph.D. in Computer Science from the University of Pisa in June 2011, and her dissertation was about privacy-by-design in data mining.

Derong Liu

Guangdong University of Technology

July 22nd, 2022

Advances of Adaptive Dynamic Programming and Reinforcement Learning for Optimal Control

Abstract

Researchers have been searching for novel control methods to handle the complexity of modern industrial processes. Artificial intelligence, and especially machine learning approaches, may provide a solution for the next generation of control methodologies capable of handling the level of complexity found in many modern industrial processes. Many researchers have shown that adaptive dynamic programming (ADP) and reinforcement learning (RL) can approximate optimal control actions very well and provide nearly optimal solutions for the control of complex nonlinear systems. This requires combining function approximation structures, such as neural networks, with optimal control techniques, such as dynamic programming. This lecture covers recent developments in ADP and RL (ADPRL) for the optimal control of complex dynamical systems.
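
To make the idea concrete, the following is a minimal, illustrative Python sketch of value-iteration-style ADP: a critic with simple polynomial features approximates the optimal value function of a toy discrete-time nonlinear plant, and a near-optimal control action is then extracted greedily from the learned critic. The plant, cost, features, and discretized action set are hypothetical choices made only for this example; this is not the specific formulation presented in the lecture.

import numpy as np

# Toy discrete-time plant x_{k+1} = f(x_k, u_k) and quadratic stage cost
# (both hypothetical choices made only for this illustration).
def f(x, u):
    return 0.9 * x + 0.1 * x**2 + u

def cost(x, u):
    return x**2 + 0.5 * u**2

# Critic: approximates the value function V(x) ~= w . phi(x)
# using simple even polynomial features.
def phi(x):
    return np.array([x**2, x**4])

actions = np.linspace(-2.0, 2.0, 41)   # discretized control set
states = np.linspace(-1.0, 1.0, 101)   # sampled states used for training
w = np.zeros(2)                        # critic weights

# ADP value iteration: repeatedly fit V(x) = min_u [cost(x, u) + V(f(x, u))].
for _ in range(50):
    feats, targets = [], []
    for x in states:
        q = [cost(x, u) + w @ phi(f(x, u)) for u in actions]
        feats.append(phi(x))
        targets.append(min(q))
    # Least-squares critic update (the function-approximation step).
    w, *_ = np.linalg.lstsq(np.array(feats), np.array(targets), rcond=None)

# Near-optimal policy extracted greedily from the learned critic.
def policy(x):
    return min(actions, key=lambda u: cost(x, u) + w @ phi(f(x, u)))

print("critic weights:", w, "  control at x=0.5:", policy(0.5))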

Short Bio

Derong Liu (S’91–M’94–SM’96–F’05) received the B.S. degree in mechanical engineering from the East China Institute of Technology (now Nanjing University of Science and Technology), Nanjing, China, in 1982, the M.S. degree in automatic control theory and applications from the Institute of Automation, Chinese Academy of Sciences, Beijing, China, in 1987, and the Ph.D. degree in electrical engineering from the University of Notre Dame, Indiana, USA, in 1994. Dr. Liu was a Product Design Engineer with China North Industries Corporation, Jilin, China, from 1982 to 1984. He was an Instructor with the Graduate School of the Chinese Academy of Sciences, Beijing, from 1987 to 1990. He was a Staff Fellow with the General Motors Research and Development Center from 1993 to 1995. He was an Assistant Professor with the Department of Electrical and Computer Engineering, Stevens Institute of Technology, from 1995 to 1999. He joined the University of Illinois at Chicago in 1999 and became a Full Professor of Electrical and Computer Engineering and of Computer Science in 2006. He was selected for the “100 Talents Program” by the Chinese Academy of Sciences in 2008, and he served as the Associate Director of the State Key Laboratory of Management and Control for Complex Systems at the Institute of Automation from 2010 to 2016. He is now a Full Professor with the School of Automation, Guangdong University of Technology. He has published 13 books and 260 papers in international journals. Dr. Liu was elected an AdCom member of the IEEE Computational Intelligence Society three times, in 2006, 2015, and 2022. He was the Editor-in-Chief of the IEEE Transactions on Neural Networks and Learning Systems from 2010 to 2015. He was twice elected a Distinguished Lecturer of the IEEE Computational Intelligence Society, in 2012 and 2016. He served as a Member of the Council of the International Federation of Automatic Control from 2014 to 2017 and as President of the Asia Pacific Neural Network Society in 2018. He was the General Chair of the 2014 IEEE World Congress on Computational Intelligence, the General Chair of the 2016 World Congress on Intelligent Control and Automation, and the General Chair of the 2017 International Conference on Neural Information Processing. Dr. Liu received the Faculty Early Career Development Award from the National Science Foundation in 1999, the University Scholar Award from the University of Illinois from 2006 to 2009, the Overseas Outstanding Young Scholar Award from the National Natural Science Foundation of China in 2008, and the Outstanding Achievement Award from the Asia Pacific Neural Network Assembly in 2014. He received the International Neural Network Society’s Gabor Award in 2018, the IEEE Transactions on Neural Networks and Learning Systems Outstanding Paper Award in 2018, the IEEE Systems, Man, and Cybernetics Society Andrew P. Sage Best Transactions Paper Award in 2018, and the IEEE/CAA Journal of Automatica Sinica Hsue-Shen Tsien Paper Award in 2019. He is the recipient of the IEEE CIS Neural Networks Pioneer Award in 2022. He has been named a Highly Cited Researcher by Clarivate for five consecutive years, from 2017 to 2021. He has been a plenary/keynote speaker at 32 international conferences. Currently, he is the Editor-in-Chief of Artificial Intelligence Review, the Deputy Editor-in-Chief of the IEEE/CAA Journal of Automatica Sinica, the Deputy Editor-in-Chief of the CAAI Transactions on Artificial Intelligence, and the Chair of the IEEE Guangzhou Section.
He is a Fellow of the IEEE, a Fellow of the International Neural Network Society, a Fellow of the International Association for Pattern Recognition, and a Member of Academia Europaea (The Academy of Europe).

Leandro Minku

University of Birmingham, UK

July 23rd, 2022

Overcoming the Challenge of Limited Labeled Data in Online Data Stream Learning

Abstract

The volume and incoming speed of data have increased tremendously over the past years. Data frequently arrive continuously over time in the form of streams, rather than forming a single static data set. Therefore, data stream learning, which learns from incoming data upon arrival, is an increasingly important approach for extracting knowledge from data. Data stream learning is a challenging task because the underlying probability distribution of the problem is typically not static; instead, it changes over time. This challenge is exacerbated by the fact that, even though the rate of incoming examples may be very large, only a small portion of these examples may arrive labeled for training, due to the high cost of the labeling process. In this talk, I will present a novel online semi-supervised data stream neural network to cope with these issues. I will also discuss further research directions to tackle these and other challenges posed by real-world data stream learning applications.
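
As a rough illustration of the setting, the sketch below shows a generic online, semi-supervised stream learner in Python: the model is updated incrementally with the few labeled examples that arrive and, for unlabeled examples, a confidence-thresholded self-training (pseudo-labeling) step is applied. The synthetic stream, label fraction, confidence threshold, and use of scikit-learn's SGDClassifier are assumptions made only for this example; the talk presents a different, neural-network-based method.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])
# Incremental linear classifier; loss="log_loss" (scikit-learn >= 1.1) enables
# predict_proba, which is needed for confidence-based pseudo-labeling.
model = SGDClassifier(loss="log_loss")
initialized = False

def synthetic_stream(n=2000, label_fraction=0.05):
    # Hypothetical two-class stream in which only ~5% of examples arrive labeled.
    for _ in range(n):
        y = int(rng.integers(0, 2))
        x = rng.normal(loc=2.0 * y - 1.0, scale=1.0, size=2)
        labeled = rng.random() < label_fraction
        yield x.reshape(1, -1), (y if labeled else None)

for x, y in synthetic_stream():
    if y is not None:                  # rare labeled example: supervised update
        model.partial_fit(x, [y], classes=classes)
        initialized = True
    elif initialized:                  # unlabeled example: self-training update
        proba = model.predict_proba(x)[0]
        if proba.max() > 0.95:         # only trust confident predictions
            model.partial_fit(x, [int(classes[proba.argmax()])])

if initialized:
    print("prediction for [1.5, 1.5]:", model.predict(np.array([[1.5, 1.5]])))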

Short Bio

Dr. Leandro L. Minku is an Associate Professor at the School of Computer Science, University of Birmingham (UK). Prior to that, he was a Lecturer in Computer Science at the University of Leicester (UK). He received the PhD, MSc, and BSc degrees in Computer Science from the University of Birmingham (UK) in 2010, the Federal University of Pernambuco (Brazil) in 2006, and the Federal University of Parana (Brazil) in 2003, respectively. Dr. Minku’s main research interests are machine learning in non-stationary environments / data stream mining, online class imbalance learning, ensembles of learning machines, and computational intelligence for software engineering. His work has been published in internationally renowned venues in both the computational intelligence and software engineering fields, such as IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Knowledge and Data Engineering, IEEE Transactions on Software Engineering, and ACM Transactions on Software Engineering and Methodology. Among other roles, Dr. Minku is Associate Editor-in-Chief for Neurocomputing, Senior Editor for IEEE Transactions on Neural Networks and Learning Systems, and Associate Editor for the Empirical Software Engineering Journal and the Journal of Systems and Software. He was the General Chair of the International Conference on Predictive Models and Data Analytics in Software Engineering (PROMISE 2019 and 2020) and Co-Chair of the Artifacts Evaluation Track of the International Conference on Software Engineering (ICSE 2020). Dr. Minku is a Senior Member of the IEEE and has served in various roles in the IEEE Computational Intelligence Society, including roles on the Member Activities, Conference, and Education Committees.

Industry Day

Marco Landi

Institut EuropIA

July 20th, 2022

Abstract

As AI has an increasing impact on our lives, we need to ensure that the wealth it creates can be shared and that we do not lose control of our own destiny.

Short Bio

Marco Landi is President at QuestIt and a leader with long experience in global high-tech business. Marco was COO of Apple Computer in Cupertino, responsible for Global Operations, Marketing, and Sales, after a successful turnaround of Apple’s EMEA activities as President of Apple Europe. Previously, he spent over 20 years at Texas Instruments, managing all business units in EMEA (based in Brussels) and in Asia (based in Hong Kong). During his three-year tenure, TI Asia increased its revenues from $1B to over $4B. In Brussels, he won the prestigious European Quality Award and was appointed Chairman of the American-European Electronics Association, representing over 300,000 workers at the European Commission in Brussels.