We are glad to announce the tutorials that have been accepted for WCCI 2022.

Conference | Title | Tutorial abstract |
---|---|---|

CEC | Towards Better Explainable AI Through Genetic Programming | Although machine learning has achieved great success in many real-world applications, it is often criticised for behaving like a black box: it can be difficult, if not impossible, to understand how a machine learning system makes its decisions or predictions. This can lead to serious consequences, such as accidents involving Tesla's self-driving cars and biases in automatic bank loan approval systems. To address this issue, Explainable AI (XAI) has become a very hot research topic in the AI field due to the urgent need for it in domains such as finance, security, medicine, gaming, and legislation. There have been an increasing number of studies on XAI in recent years, which improve current machine learning systems in different respects. In evolutionary computation, Genetic Programming (GP) has been successfully used in various machine learning tasks including classification, symbolic regression, clustering, feature construction, and automatic heuristic design. As a symbolic evolutionary learning approach, GP has great intrinsic potential to contribute to XAI, as a GP model tends to be interpretable. This tutorial will give a brief introduction to common approaches in XAI, such as attention maps, post-hoc explanation (LIME, SHAP), and visualisation, and then introduce how to achieve better model interpretability through GP, including multi-objective GP, simplification in GP, different representations in GP, and post-hoc explanation using GP. Finally, we will discuss current trends, challenges, and potential future research directions. |
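To see why a GP model tends to be interpretable, consider that an evolved individual is itself a readable symbolic expression. The following is our own minimal illustration (not material from the tutorial):

```python
# Minimal illustration of GP-model interpretability: the evolved
# individual IS a symbolic expression tree that can be printed and read.
# (Illustrative sketch only; representation is our own assumption.)

def evaluate(node, x):
    """Recursively evaluate an expression tree at input x."""
    if isinstance(node, (int, float)):
        return node
    if node == "x":
        return x
    op, left, right = node
    a, b = evaluate(left, x), evaluate(right, x)
    if op == "+": return a + b
    if op == "-": return a - b
    if op == "*": return a * b
    raise ValueError(op)

def to_infix(node):
    """Render the tree as a human-readable formula."""
    if isinstance(node, (int, float)):
        return str(node)
    if node == "x":
        return "x"
    op, left, right = node
    return f"({to_infix(left)} {op} {to_infix(right)})"

# An evolved model approximating f(x) = x^2 + 1 might look like:
model = ("+", ("*", "x", "x"), 1)
print(to_infix(model))    # prints ((x * x) + 1): directly inspectable
```

Unlike a weight matrix, the model's decision logic can be audited symbol by symbol, which is the property the tutorial builds on.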

CEC | Evolutionary Machine Learning for Combinatorial Optimisation | Combinatorial optimisation, such as scheduling and resource allocation for cloud/grid/high-performance computing, has attracted attention from both academia and industry due to its practical value. Evolutionary machine learning, including evolutionary computation techniques such as evolutionary algorithms, has been widely used to handle combinatorial optimisation problems. Instead of manually designing heuristics for a specific problem instance, evolutionary machine learning has also been successfully used in hyper-heuristic approaches to select or generate heuristics for a class of problem instances, especially dynamic problems, such as learning scheduling heuristics for dynamic job shop scheduling. This tutorial will provide a comprehensive introduction to evolutionary machine learning techniques for combinatorial optimisation problems, covering different types of (advanced) approaches for job shop scheduling and resource allocation in cloud computing. The tutorial will familiarise you with evolutionary machine learning in four respects. First, you will learn the definition of hyper-heuristic learning, and the similarities and differences between heuristic learning and hyper-heuristic learning. Second, you will gain a good understanding of different combinatorial optimisation problems; details of job shop scheduling (static, dynamic, and flexible) and resource allocation in cloud computing will be given. Third, how to use evolutionary algorithms as hyper-heuristic approaches to learn heuristics for combinatorial optimisation will be introduced with examples. Finally, the tutorial will show how advanced machine learning techniques such as feature selection, surrogates, and multitask learning can be combined with genetic programming for combinatorial optimisation, taking job shop scheduling as an example. |
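The heuristics such hyper-heuristics evolve are, at their core, priority functions over waiting jobs. As our own toy sketch (not material from the tutorial), here are two classic hand-coded dispatching rules compared on a tiny single-machine queue:

```python
# A dispatching rule for job shop scheduling is a priority function over
# waiting jobs; hyper-heuristics evolve such functions automatically.
# Here two classic rules are hand-coded and compared on a toy queue.
# (Our own minimal sketch; not material from the tutorial.)

def mean_flowtime(processing_times, rule):
    """Process jobs one by one in the order chosen by `rule`."""
    queue = list(processing_times)
    clock, total = 0.0, 0.0
    while queue:
        job = rule(queue)          # pick the next job by priority
        queue.remove(job)
        clock += job               # the job's processing time elapses
        total += clock             # completion time of this job
    return total / len(processing_times)

fifo = lambda queue: queue[0]      # first-in-first-out
spt = lambda queue: min(queue)     # shortest processing time first

jobs = [4.0, 1.0, 3.0, 2.0]
print(mean_flowtime(jobs, fifo), mean_flowtime(jobs, spt))  # 6.75 5.0
```

A GP hyper-heuristic would evolve the body of `rule` (e.g. arithmetic over job attributes) instead of choosing it by hand.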

CEC | Evolutionary Algorithms and Hyper-Heuristics | Hyper-heuristics is a rapidly developing domain which has proven effective at providing generalized solutions to problems across problem domains. Evolutionary algorithms have played a pivotal role in the advancement of hyper-heuristics and continue to do so. The aim of the tutorial is firstly to provide an introduction to evolutionary algorithm hyper-heuristics for researchers interested in working in this domain. An overview of hyper-heuristics will be provided, including the assessment of hyper-heuristic performance. The tutorial will examine each of the four categories of hyper-heuristics, namely selection constructive, selection perturbative, generation constructive and generation perturbative, showing how evolutionary algorithms can be used for each type of hyper-heuristic. A case study will be presented for each type of hyper-heuristic to provide researchers with a foundation to start their own research in this area. The EvoHyp library will be used to demonstrate the implementation of evolutionary algorithm hyper-heuristics. A theoretical understanding of evolutionary algorithm hyper-heuristics will be provided, and a new measure to assess hyper-heuristic performance will also be presented. Challenges in the implementation of evolutionary algorithm hyper-heuristics will be highlighted. The tutorial will also examine recent trends in evolutionary algorithm hyper-heuristics such as transfer learning and automated design. The use of hyper-heuristics for the automated design of evolutionary algorithms will be examined, as well as the application of evolutionary algorithm hyper-heuristics to the design of computational intelligence techniques. The tutorial will end with a discussion session on future directions in evolutionary algorithms and hyper-heuristics. |

CEC | How to Evaluate the Outcome of Multi-Objective Optimisation Algorithms | When developing an exciting multi-objective algorithm, have you ever been puzzled about how to assess it? When applying modern multi-objective optimisation techniques to your specific application problem, have you ever been baffled about how to select the right quality indicators to evaluate the experimental results? When using different evaluation indicators, have you ever been confused by inconsistent results (e.g. those obtained by hypervolume and IGD)? In this tutorial, we will address these issues. We start off by explaining why we need evaluation indicators in multi-objective optimisation and what evaluation results actually mean. Then, we introduce example evaluation indicators as well as their behaviours and preferences, followed by an answer to what makes an ideal indicator. Afterwards, we provide detailed guidance on how to select and use proper quality indicators in various situations. Lastly, we conclude the tutorial by suggesting some future research directions. This tutorial is designed for a wide audience; you are not expected to have a deep understanding of multi-objective optimisation. |
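For concreteness, two of the indicators mentioned above can be computed by hand for a tiny two-objective minimisation front. This is our own illustrative sketch, not material from the tutorial:

```python
# Hypervolume (2-D) and IGD on a toy front, both objectives minimised.
# (Our own illustrative sketch, not material from the tutorial.)

def hypervolume_2d(front, ref):
    """Area dominated by `front` and bounded by reference point `ref`;
    `front` must be mutually non-dominated."""
    pts = sorted(front)                        # ascending in f1
    xs = [p[0] for p in pts] + [ref[0]]
    hv = 0.0
    for (x, y), next_x in zip(pts, xs[1:]):
        hv += (next_x - x) * (ref[1] - y)      # slab dominated by this point
    return hv

def igd(front, reference_set):
    """Mean distance from each reference point to its nearest front point."""
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return sum(min(dist(r, p) for p in front) for r in reference_set) / len(reference_set)

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(4.0, 4.0)))   # 6.0
print(igd(front, reference_set=front))         # 0.0: front coincides with reference
```

Note that hypervolume needs a reference point while IGD needs a reference set; this difference in auxiliary information is one source of the inconsistent rankings the tutorial discusses.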

CEC | Pareto Optimization for Subset Selection: Theories and Practical Algorithms | Pareto optimization is a general framework for solving single-objective optimization problems based on multi-objective evolutionary optimization. The main idea is to transform a single-objective optimization problem into a bi-objective one, employ a multi-objective evolutionary algorithm to solve it, and finally return the best feasible solution w.r.t. the original single-objective problem from the generated non-dominated solution set. Pareto optimization has been shown to be a promising method for the subset selection problem, which has applications in diverse areas, including machine learning, data mining, natural language processing, computer vision, and information retrieval. The theoretical understanding of Pareto optimization has recently been significantly developed, showing its irreplaceability for subset selection. This tutorial will introduce Pareto optimization from scratch. We will show that it achieves the best-so-far theoretical and practical performance in several applications of subset selection. We will also introduce advanced variants of Pareto optimization for large-scale, noisy and dynamic subset selection. |
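The transform-then-solve idea can be sketched in a few lines on a toy maximum-coverage instance. The following POSS-style loop is our own simplified illustration under ad-hoc settings, not the tutorial's code:

```python
# Pareto optimization for subset selection on toy maximum coverage:
# turn "maximise coverage s.t. |S| <= budget" into the bi-objective
# problem (maximise coverage, minimise |S|), evolve a non-dominated
# archive, then return the best feasible solution at the end.
# (Our own simplified sketch, not the tutorial's code.)
import random

universe_sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {6, 7}, {1, 7}]
budget = 2                                   # original constraint: |S| <= 2

def objectives(mask):
    covered = set().union(*[universe_sets[i] for i, b in enumerate(mask) if b])
    return len(covered), sum(mask)           # (coverage up, size down)

def dominates(a, b):
    return a[0] >= b[0] and a[1] <= b[1] and a != b

random.seed(0)
n = len(universe_sets)
empty = (0,) * n
archive = {empty: objectives(empty)}         # non-dominated population
for _ in range(2000):
    parent = random.choice(list(archive))
    child = tuple(b ^ (random.random() < 1 / n) for b in parent)  # bit-flip mutation
    fc = objectives(child)
    if not any(dominates(f, fc) or f == fc for f in archive.values()):
        archive = {m: f for m, f in archive.items() if not dominates(fc, f)}
        archive[child] = fc

# best feasible solution w.r.t. the original single-objective problem
best = max(f[0] for f in archive.values() if f[1] <= budget)
print(best)                                  # 6: e.g. {1,2,3} with {4,5,6}
```

The archive simultaneously holds good subsets of every size, which is exactly what lets the method escape the local optima that greedy size-constrained search gets stuck in.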

CEC | Evolutionary Continuous Dynamic Optimization | Many real-world optimization problems are dynamic. The field of evolutionary dynamic optimization deals with such problems, where the search space changes over time. This tutorial is dedicated to exploring the recent advances in the field of evolutionary continuous dynamic optimization, and is suitable for anyone with an interest in evolutionary computation who wishes to learn more about the state of the art in continuous dynamic optimization problems. The tutorial is specifically targeted at Ph.D. students and early-career researchers who want to gain an overview of the field and wish to identify the most important open questions and challenges to bootstrap their research in continuous dynamic optimization. It can also be of interest to more experienced researchers as well as practitioners who wish to get a glimpse of the latest developments in the field. In addition to our prime goal, which is to inform and educate, we also wish to use this tutorial as a forum for exchanging ideas between researchers. Overall, this tutorial provides a unique opportunity to showcase the latest developments on this hot research topic to the evolutionary computation research community. |

CEC | Statistical Analyses for Multi-objective Stochastic Optimization Algorithms | Moving into the era of explainable AI, a comprehensive comparison of the performance of multi-objective stochastic optimization algorithms has become an increasingly important task. One of the most common ways to compare the performance of stochastic optimization algorithms is to apply statistical analyses. However, there are still caveats that need to be addressed to obtain relevant and valid conclusions. First of all, the performance measures (i.e., quality indicators) should be selected with great care, since some measures can be correlated and their data is then further involved in the statistical analyses. Further, statistical analyses require sound knowledge to be applied properly; such knowledge is often lacking, which leads to incorrect conclusions. Next, the standard approaches can be influenced by outliers (e.g., poor runs) or by statistically insignificant differences (solutions within some ε-neighborhood) that exist in the data. The analysis is further complicated because the selection of quality indicators as performance measures for multi-objective optimization algorithms can be biased towards the user's preference. Motivated by the great success of the tutorial entitled “Statistical Analyses for Meta-heuristic Stochastic Optimization Algorithms”, which focused on single-objective optimization and attracted approximately 45 participants at IEEE CEC 2021, GECCO 2021 and GECCO 2020, and of its sibling tutorial at PPSN 2020 with over 40 participants, this tutorial at IEEE CEC 2022 turns the focus to multi-objective optimization. |

CEC | Introduction into Matrix Adaptation Evolution Strategies | Since the recent successes of Evolution Strategies (ES) in the field of Reinforcement Learning, ESs have attracted interest also outside the ES community. Matrix Adaptation Evolution Strategies are regarded as state-of-the-art in evolutionary real-valued parameter optimization. These strategies are able to learn correlations between the different decision variables in order to generate suitable mutations that allow for an efficient approximation of the optimizer of the problem at hand. Until recently, correlation learning was realized by covariance matrix adaptation (CMA). It has now been shown that there is no need to learn the covariance matrix; instead, a mutation matrix can be learned directly. As a result, the algorithms become simpler and can be easily modified to also tackle large-scale optimization problems and constrained optimization problems. One such ES was the winner of the 2018 CEC constrained optimization benchmark competition on the high-dimensional problem instances. This tutorial provides a gentle introduction to Matrix Adaptation Evolution Strategies (MA-ES), explaining the design principles. Based on these principles, it will be shown how one can modify the MA-ES in order to a) incorporate self-adaptive behavior, b) handle large-scale optimization problems with thousands of variables, and c) treat constrained optimization problems. The tutorial will also show some 2D and 3D graphical demonstrations of MA-ES at work on constrained optimization problems with non-linear equality and inequality constraints. This tutorial is intended as an introduction; the target audience is not restricted to EC specialists. The topic could be of interest to fuzzy and neural network specialists as well. No special mathematical background is needed to understand this presentation. |
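To make the "mutation matrix instead of covariance matrix" idea tangible, here is our own deliberately stripped-down sketch of a (μ/μ_w, λ) MA-ES-style loop on the sphere function, with ad-hoc constants; see the tutorial (and the MA-ES literature by Beyer and Sendhoff) for the principled derivation and parameter settings:

```python
# Stripped-down MA-ES-style loop: the mutation matrix M is adapted
# directly (no covariance matrix, no decomposition). Constants are
# ad-hoc; our own illustrative sketch, not the tutorial's algorithm.
import math, random

random.seed(1)
n, lam, mu = 5, 12, 6
raw = [math.log(mu + 0.5) - math.log(i + 1) for i in range(mu)]
total = sum(raw)
weights = [w / total for w in raw]
mu_eff = 1.0 / sum(w * w for w in weights)
c_sigma = (mu_eff + 2) / (n + mu_eff + 5)
c_1 = 2.0 / ((n + 1.3) ** 2 + mu_eff)
c_mu = min(1 - c_1, 2 * (mu_eff - 2 + 1 / mu_eff) / ((n + 2) ** 2 + mu_eff))
chi_n = math.sqrt(n) * (1 - 1 / (4 * n))

def sphere(x): return sum(v * v for v in x)
def matvec(M, v): return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

y, sigma = [1.0] * n, 0.5
M = [[float(i == j) for j in range(n)] for i in range(n)]
p = [0.0] * n                                   # evolution path
for _ in range(250):
    samples = []
    for _ in range(lam):
        z = [random.gauss(0, 1) for _ in range(n)]
        d = matvec(M, z)                        # d = M z: correlated mutation
        samples.append((sphere([yi + sigma * di for yi, di in zip(y, d)]), z, d))
    samples.sort(key=lambda s: s[0])            # select the mu best
    zbar = [sum(w * s[1][i] for w, s in zip(weights, samples)) for i in range(n)]
    dbar = [sum(w * s[2][i] for w, s in zip(weights, samples)) for i in range(n)]
    p = [(1 - c_sigma) * pi + math.sqrt(c_sigma * (2 - c_sigma) * mu_eff) * zi
         for pi, zi in zip(p, zbar)]
    # M <- M (I + c1/2 (p p^T - I) + cmu/2 (sum_w z z^T - I))
    upd = [[(i == j) + (c_1 / 2) * (p[i] * p[j] - (i == j))
            + (c_mu / 2) * (sum(w * s[1][i] * s[1][j]
                                for w, s in zip(weights, samples)) - (i == j))
            for j in range(n)] for i in range(n)]
    M = [[sum(M[i][k] * upd[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    y = [yi + sigma * di for yi, di in zip(y, dbar)]
    sigma *= math.exp((c_sigma / 1.5) * (math.sqrt(sum(v * v for v in p)) / chi_n - 1))

print(sphere(y))   # converges towards 0 on the sphere
```

The key point is the matrix update line: M is pushed multiplicatively towards directions of successful mutations, with no covariance matrix ever formed or factorized.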

CEC | Network analysis and evolutionary dynamics on graphs | Suppose the evolutionary dynamics of a population of individuals are substantially influenced by the relationships between the individuals. Then the relations between the individuals form a network, and their dynamics are naturally described using elements of network science, particularly evolutionary graph theory. This tutorial gives an introduction to graph-oriented modelling and analysis of evolutionary processes. Based on a review of terminology and basic concepts in network analysis, graph-theoretical results are discussed which are relevant for analysing and designing evolutionary processes. As network analysis and design have become an indispensable approach in the life sciences, the tutorial not only discusses approaches and results, but also intends to foster the migration of results from network science and graph theory to the field of evolutionary computation. |
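The central object of evolutionary graph theory can be simulated in a few lines. As our own toy sketch (not material from the tutorial), here is a neutral Moran birth-death process on a graph; on a regular graph such as the complete graph, a single neutral mutant fixes with probability 1/N:

```python
# Toy Moran birth-death process on a graph (neutral selection).
# (Our own illustrative sketch, not material from the tutorial.)
import random

def moran_fixation(neighbors, mutant_start, trials, seed=0):
    """Fraction of runs in which a single neutral mutant takes over."""
    rng = random.Random(seed)
    N = len(neighbors)
    fixed = 0
    for _ in range(trials):
        state = [0] * N
        state[mutant_start] = 1               # one mutant, fitness-neutral
        while 0 < sum(state) < N:
            parent = rng.randrange(N)         # uniform pick: neutral selection
            child = rng.choice(neighbors[parent])
            state[child] = state[parent]      # offspring replaces a neighbour
        fixed += state[0]                     # 1 iff the mutant type fixed
    return fixed / trials

N = 8
complete = [[j for j in range(N) if j != i] for i in range(N)]
p = moran_fixation(complete, mutant_start=0, trials=2000)
print(p)                                      # close to 1/N = 0.125
```

Changing `neighbors` to a star, cycle, or other topology is exactly the kind of experiment evolutionary graph theory reasons about analytically.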

CEC | Evolutionary Many-Objective Optimization | Evolutionary multi-objective optimization (EMO) has been a very active research area in the field of evolutionary computation over the last two decades. Within the EMO area, one of the hottest research topics is evolutionary many-objective optimization. The difference between multi-objective and many-objective optimization is simply the number of objectives: multi-objective problems with four or more objectives are usually referred to as many-objective problems. The increase in the number of objectives makes multi-objective problems significantly more difficult. The goal of the tutorial is to clearly explain the difficulties of evolutionary many-objective optimization, approaches to handling those difficulties, and promising future research directions. |

CEC | Evolutionary Feature Reduction for Machine Learning | In the era of big data, vast amounts of high-dimensional data have become ubiquitous in various domains, such as social media, healthcare, and cybersecurity. Training machine learning algorithms on such high-dimensional data is not practical due to the curse of dimensionality. Furthermore, high-dimensional data might contain redundant and/or irrelevant features that obscure the useful information in relevant features. Feature reduction can address these issues by building a smaller but more informative feature set. Feature selection (FS) and feature construction (FC) are the two main approaches to feature reduction. FS aims to select a small subset of original (relevant) features, while FC aims to create a small set of new high-level (informative) features based on the original feature set. Although both approaches are essential pre-processing steps, they are challenging due to their large and complex search spaces. While exhaustive searches are impractical due to their intensive computational cost, traditional heuristic searches require fewer computational resources but can be trapped at local optima. Evolutionary computation (EC) has been widely applied to feature reduction because of its global search ability. Existing EC-based feature reduction approaches successfully reduce data dimensionality and improve the classification performance and interpretability of the built models. This tutorial first introduces the main concepts and the general framework of feature reduction. Then, we will show how EC techniques, such as particle swarm optimisation, genetic programming, ant colony optimisation, and evolutionary multi-objective optimisation, can address challenges in feature reduction. The effectiveness of EC-based feature reduction is illustrated through several applications such as bioinformatics, image analysis and pattern classification, and cybersecurity. The tutorial concludes with existing challenges for future research. |
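The evolutionary search loop behind EC-based feature selection fits in a few lines. The following (1+1) EA on a bitmask is our own toy sketch with a stand-in objective (real work would use classifier accuracy), not material from the tutorial:

```python
# A minimal (1+1) EA for feature selection: a bitmask encodes the
# selected features; fitness trades informative coverage against subset
# size. The "informativeness" function is a toy stand-in for accuracy.
# (Our own illustrative sketch, not material from the tutorial.)
import random

groups = [0, 0, 1, 1, 2, None, None, None]   # feature i carries signal groups[i];
                                             # None marks an irrelevant feature

def fitness(mask):
    signal = {groups[i] for i, b in enumerate(mask) if b} - {None}
    return len(signal) - 0.2 * sum(mask)     # coverage minus a size penalty

random.seed(3)
n = len(groups)
mask = tuple(random.randint(0, 1) for _ in range(n))
for _ in range(500):
    child = tuple(b ^ (random.random() < 1 / n) for b in mask)  # bit-flip mutation
    if fitness(child) >= fitness(mask):      # accept if no worse
        mask = child

print(mask, fitness(mask))   # ideally one feature per group, none irrelevant
```

The size penalty makes dropping redundant and irrelevant features strictly improving, which is how the single-objective variant of multi-objective "accuracy vs. size" formulations works.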

CEC | Benchmarking and analyzing iterative optimization heuristics with IOHprofiler | Evaluating sampling-based optimization algorithms via sound benchmarking is a core concern in evolutionary computation. IOHprofiler supports researchers in this task by providing an easy-to-use, interactive, and highly customizable environment for benchmarking any iterative optimizer. The experimenter module provides easy access to common problem sets (e.g. BBOB functions) and modular logging functionality that can be easily combined with other optimization functions. The resulting logs (and logs from other platforms, e.g. COCO and Nevergrad) are fully interoperable with the IOHanalyzer, which provides access to highly interactive performance analysis in the form of a wide array of visualizations and statistical analyses. A GUI, hosted at https://iohanalyzer.liacs.nl/, makes these analysis tools easy to access. Data from many repositories (e.g. COCO, Nevergrad) are pre-processed, such that the effort required to compare performance to existing algorithms is greatly reduced. This tutorial will introduce the motivation of the IOHprofiler project and its core functionality. The key features and components will be highlighted and demonstrated by the organizers. Afterward, there will be a hands-on session where the usage of the various analysis functionalities is demonstrated. Guided examples will be provided to highlight the many aspects of algorithm performance which can be explored using the interactive GUI. For this session, no software installation is necessary. |
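The kind of record such a logger collects is simple: per run, (evaluation count, best-so-far value) pairs, from which both fixed-budget and fixed-target views can be derived. The sketch below is generic plain Python, NOT the IOHprofiler API (see the IOHprofiler documentation for that):

```python
# Generic best-so-far logging for an iterative optimizer, producing the
# (evaluations, best-so-far) trajectory data that benchmarking tools
# like IOHanalyzer consume. (Plain-Python sketch, NOT the IOHprofiler API.)
import random

def random_search(f, dim, budget, seed):
    rng = random.Random(seed)
    best, log = float("inf"), []
    for evals in range(1, budget + 1):
        value = f([rng.uniform(-5, 5) for _ in range(dim)])
        if value < best:
            best = value
            log.append((evals, best))        # log only improvements
    return log

sphere = lambda x: sum(v * v for v in x)
log = random_search(sphere, dim=3, budget=200, seed=42)

# fixed-target view: evaluations needed to reach a target value
target = 10.0
hit = next((evals for evals, best in log if best <= target), None)
print(log[-1], hit)
```

Storing improvements only keeps logs small while losing no information, since best-so-far curves are step functions.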

CEC | Principle and Applications of Semantic GP | Semantic genetic programming is a rapidly growing research track of Genetic Programming (GP). Semantic GP incorporates semantic awareness into GP and explicitly uses more information on the behaviour of programs in the search. When evaluating a program, semantic GP characterises it with a vector of outputs instead of a single scalar fitness value. Research has demonstrated that this additional behavioural information facilitates the design of a more effective GP search. In addition, the geometric properties of the semantic space lead to more attractive search operators with better theoretical characteristics. With the geometric information of semantics, the GP dynamics are easier to understand and interpret, and inappropriate behaviours are easier to prevent. All of this contributes to making GP a more informed and intelligent method. This tutorial will give a comprehensive overview of semantic GP methods. We will review various ways of integrating semantic awareness into the evolutionary process of GP. In particular, we will introduce geometric semantic GP, review its formal geometric semantic framework, and analyse the theoretical properties of the fitness landscape under this framework. This will be followed by a review of novel developments of provably good semantic genetic operators. Another aspect is the efficient implementation of semantic search operators, which is still challenging; we will illustrate efficient and concise implementations of these operators. A further focus of this tutorial is to stimulate the audience by showing some promising results obtained so far in applications of semantic GP, including symbolic regression and classification tasks in healthcare, civil engineering, natural language processing and so on. We will also identify and discuss current challenges and promising future directions in semantic GP with the hope of motivating new and stimulating contributions. |
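The geometric idea is compact enough to sketch: an offspring whose semantics is a convex combination of its parents' semantics is provably no farther from the target behaviour than the worse parent. Our own minimal illustration (not material from the tutorial):

```python
# Geometric semantic crossover in one line: the offspring's semantics
# (vector of outputs over the training cases) is a convex combination of
# the parents' semantics. By convexity, its distance to the target
# semantics never exceeds the worse parent's distance.
# (Our own illustrative sketch, not material from the tutorial.)
import random

def semantics(program, cases):
    return [program(x) for x in cases]

def distance(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

cases = [0.0, 0.5, 1.0, 2.0]
target = semantics(lambda x: x * x, cases)   # desired input-output behaviour

p1 = lambda x: x                             # parent programs
p2 = lambda x: 2 * x - 1
r = random.Random(0).random()                # crossover ratio in [0, 1)
offspring = lambda x: r * p1(x) + (1 - r) * p2(x)

d1 = distance(semantics(p1, cases), target)
d2 = distance(semantics(p2, cases), target)
do = distance(semantics(offspring, cases), target)
print(d1, d2, do)                            # do <= max(d1, d2), guaranteed
```

This guarantee makes the fitness landscape unimodal in semantic space, which is the theoretical property the tutorial's framework analyses; the practical challenge it mentions is that naive composition makes program size grow, hence the interest in efficient implementations.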

CEC | How to Compare Evolutionary Multi-Objective Optimization Algorithms: Parameter Specifications, Indicators and Test Problems | Evolutionary multi-objective optimization (EMO) has been a very active research area in recent years, and new EMO algorithms are proposed almost every year. When a new EMO algorithm is proposed, computational experiments are usually conducted to compare its performance with existing algorithms. Experimental results are then summarized and reported as tables together with statistical significance test results, and those results usually show higher performance of the new algorithm than of existing algorithms. However, fair comparison of different EMO algorithms is not easy, since the evaluated performance of each algorithm usually depends on the experimental settings, and also because solution sets, rather than single solutions, are evaluated. In this tutorial, we will first explain some commonly used software platforms and experimental settings for the comparison of EMO algorithms. Then, we will discuss how to specify the common experimental setting used by all compared EMO algorithms. More specifically, the focus of this tutorial is the setting related to the following four issues: (i) termination condition, (ii) population size, (iii) performance indicators, and (iv) test problems. For each issue, we will clearly demonstrate its strong effect on the comparison results of EMO algorithms, and then discuss how to handle it for fair comparison. These discussions aim to encourage the future development of the EMO research field without focusing too much on overly specialized new algorithms in a specific setting. Finally, we will also suggest some promising future research topics related to each issue. |

CEC | Learn to Optimize | The huge success of machine learning, with AlphaGo, AlphaStar, and AlphaFold hailed as milestones, has generated great enthusiasm in the Artificial Intelligence (AI) community about whether machine learning could achieve further breakthroughs in other AI-related domains. Optimization problems might be among the first that come to mind, given their tight relationship with machine learning and wide applications in the real world. On the other hand, the concept of integrating learning and optimization has been a long-standing theme of research in many sub-areas of computational intelligence, under the names of algorithm selection, parameter tuning, automated algorithm configuration, hyper-heuristics, transfer optimization, etc. This tutorial will first review these relevant topics in a unified framework dubbed “Learn to Optimize” (LTO). We hope such a review will provide a big picture for the following questions, which might be of interest to the whole AI community: 1. How well, so far, can a machine solve optimization problems by learning from past experience? 2. How can LTO technology be applied in practice? Based on the review, the latest research that falls within the scope of LTO, such as automatic construction approaches for parallel algorithm portfolios and reinforcement learning for neural solvers, will be introduced in more detail. |

CEC | Differential Evolution with Ensembles, Adaptations and Topologies | Differential evolution (DE) is one of the most successful numerical optimization paradigms, so practitioners and junior researchers will be interested in learning this optimization algorithm. The DE field is also growing rapidly; hence, a tutorial on DE will be timely and beneficial to many CEC 2022 conference attendees. This tutorial will introduce the basics of DE and then point out some advanced methods for solving diverse numerical optimization problems with DE. |
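The basic DE cycle of mutation, binomial crossover, and greedy selection is short enough to sketch. This is our own illustration of classic DE/rand/1/bin with common default settings (F=0.5, CR=0.9), not material from the tutorial:

```python
# Minimal DE/rand/1/bin on the sphere function: difference-vector
# mutation, binomial crossover, one-to-one greedy selection.
# (Our own illustration with common defaults; not the tutorial's code.)
import random

random.seed(7)
n, np_, F, CR = 5, 20, 0.5, 0.9

def sphere(x): return sum(v * v for v in x)

pop = [[random.uniform(-5, 5) for _ in range(n)] for _ in range(np_)]
fit = [sphere(x) for x in pop]
for _ in range(300):
    for i in range(np_):
        a, b, c = random.sample([j for j in range(np_) if j != i], 3)
        jrand = random.randrange(n)          # ensure at least one mutant gene
        trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                 if (random.random() < CR or j == jrand) else pop[i][j]
                 for j in range(n)]
        ft = sphere(trial)
        if ft <= fit[i]:                     # greedy one-to-one selection
            pop[i], fit[i] = trial, ft

print(min(fit))                              # converges towards 0
```

The ensembles, adaptations, and topologies the tutorial covers all modify pieces of this loop: the choice of base vector, the control parameters F and CR, and which individuals may act as donors.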

CEC | Landscape Analysis of Optimisation Problems and Algorithms | The notion of a fitness landscape was first introduced in 1932 to understand natural evolution, but the concept was later applied in the context of evolutionary computation to understand algorithm behaviour on different problems. In the last decade, the field of fitness landscapes has experienced a large upswing in research, evident in the increased number of published papers on the topic as well as regular tutorials, workshops and special sessions at all the major evolutionary computation conferences. More recently, landscape analysis has been used in contexts beyond evolutionary computation, in areas such as feature selection for data mining, hyperparameter optimisation and neural network training. A further recent advance has been the analysis of landscapes through the trajectories of algorithms. The search paths of algorithms provide samples of the search space that can be seen as a view of the landscape from the perspective of the algorithm. What algorithms "see" as they move through the search space of different problems can help us understand how evolutionary and other search algorithms behave on problems with different characteristics. This tutorial provides an overview of landscape analysis in different contexts, including techniques for understanding and characterising discrete and continuous optimisation problems, applications of landscape analysis in performance prediction and algorithm selection, and the analysis of search trajectories to understand the behaviour of search algorithms. |
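One classic landscape measure, fitness-distance correlation (FDC), can be computed from a simple sample. As our own illustrative sketch (not tutorial code): on OneMax, fitness is exactly n minus the Hamming distance to the optimum, so the FDC is -1, the signature of a perfectly easy landscape:

```python
# Fitness-distance correlation (FDC): correlate the fitness of sampled
# solutions with their distance to the known optimum.
# (Our own illustrative sketch, not material from the tutorial.)
import random

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

random.seed(5)
n = 20
optimum = [1] * n
sample = [[random.randint(0, 1) for _ in range(n)] for _ in range(200)]
fitness = [sum(s) for s in sample]                        # OneMax fitness
dist = [sum(a != b for a, b in zip(s, optimum)) for s in sample]
print(pearson(fitness, dist))                             # -1.0 on OneMax
```

Trajectory-based analysis, as discussed above, replaces the uniform random `sample` with the points an algorithm actually visits, giving the algorithm's-eye view of the same landscape.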

CEC | Constraint Handling in Multiobjective Optimization | This tutorial provides an overview of the state of the art in constraint handling in multiobjective optimization. It starts with the motivation for dealing with constrained multiobjective optimization problems (CMOPs), gives their formal definition, and describes the prerequisites for and challenges in solving CMOPs. Next, it discusses constraint handling techniques (CHTs) for both single- and multiobjective optimization with an emphasis on the recently proposed techniques for multiobjective optimization. It further presents the test problems, contrasts the artificial and real-world test problems, and characterizes the problems from the perspective of constraints. It also discusses the means for assessing the performance of algorithms solving CMOPs. The tutorial concludes with a summary of the state of the art and a discussion of open issues and future research directions. |
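One of the most widely used constraint handling techniques, Deb's feasibility rules, reduces to a three-case comparison predicate. Our own sketch of the classic rules (not code from the tutorial):

```python
# Deb's feasibility rules as a comparison predicate that any evolutionary
# selection scheme can use. (Our own sketch of the classic CHT, not code
# from the tutorial.)

def violation(g_values):
    """Total constraint violation for constraints g_i(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def better(f_a, g_a, f_b, g_b):
    """True if solution a is preferred over solution b."""
    va, vb = violation(g_a), violation(g_b)
    if va == 0 and vb == 0:
        return f_a < f_b        # both feasible: compare objective values
    if va == 0 or vb == 0:
        return va == 0          # a feasible solution beats an infeasible one
    return va < vb              # both infeasible: less violation wins

# a feasible solution (f=3) beats an infeasible one (f=1) regardless of f:
print(better(3.0, [-0.5], 1.0, [0.2]))   # True
```

In the multiobjective setting the tutorial focuses on, the first case generalises from "compare objective" to "check Pareto dominance", which is exactly where the recently proposed techniques differ.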

CEC | External Archivers for Multi-objective Evolutionary Algorithms | This tutorial presents an overview of archiving strategies developed in recent years for approximating the solution sets of multi-objective optimization problems with evolutionary algorithms. More precisely, we will present and analyze several archiving strategies that aim at different finite-size approximations of either the set of optimal solutions (Pareto set and front) or the set of approximate solutions of a given optimization problem. The convergence analysis will be done for a very broad framework that includes all existing evolutionary algorithms (along with other metaheuristics) and that uses only minimal assumptions on the process used to generate new candidate solutions. As will be seen, even small changes in the design of an archiver can have significant effects on the respective limit archives. It is important to note that all of the archivers presented here can be coupled with any set-based multi-objective search algorithm, and that the resulting hybrid method inherits the convergence properties of the archiver. This tutorial hence targets all algorithm designers and practitioners in the field of multi-objective optimization. We hope that the archivers can either be used to enhance their preferred search methods or serve as a starting point for the design of further archiving strategies that aim at different approximations of the solution set. |
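The simplest archiver, keeping every candidate not dominated by the archive, is a useful baseline for the strategies discussed above. Our own minimal sketch (not material from the tutorial); the bounded variants the tutorial analyses additionally prune to a size limit:

```python
# Baseline unbounded non-dominated archiver: accept a candidate iff no
# archive member dominates it, then evict members it dominates.
# (Our own minimal sketch, not material from the tutorial.)

def dominates(a, b):
    """a Pareto-dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    if any(dominates(a, candidate) or a == candidate for a in archive):
        return archive                         # rejected: dominated or duplicate
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

archive = []
for point in [(3, 3), (1, 4), (2, 2), (4, 1), (2, 3)]:
    archive = update_archive(archive, point)
print(archive)                                 # [(1, 4), (2, 2), (4, 1)]
```

Because this update is independent of how candidates were generated, it matches the tutorial's framework: any search process feeding points into `update_archive` yields the same limit archive.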

CEC | Tutorial on Modern Linkage Learning Techniques in Combinatorial Optimization | Linkage learning is employed by many state-of-the-art evolutionary methods dedicated to solving problems in various domains: binary, discrete non-binary, permutation-based, continuous, and others. It has been successfully applied to solving single- and multi-objective problems. The information about the underlying problem structure, discovered by linkage learning, is a key part of many state-of-the-art evolutionary methods. However, linkage learning techniques are often considered hard to understand or difficult to use. Although linkage learning applies to any optimization domain, the techniques dedicated to continuous search spaces are usually significantly different from those proposed for combinatorial problems. Therefore, this tutorial will focus on linkage learning techniques dedicated to discrete (including binary) and permutation-based search spaces. Nevertheless, for the presented techniques, we will point to their successful applications in continuous search spaces. |
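A common ingredient of discrete linkage learning is a pairwise dependency measure over a population, such as mutual information, used to decide which genes should be treated as a linked block. Our own toy sketch (not tutorial code):

```python
# Linkage discovery in a nutshell: measure pairwise dependency between
# genes across a population, e.g. with mutual information, and link
# gene pairs that co-vary. (Our own toy sketch, not tutorial code.)
import math, random

def mutual_information(xs, ys):
    n = len(xs)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = sum(1 for x, y in zip(xs, ys) if x == a and y == b) / n
            px = sum(1 for x in xs if x == a) / n
            py = sum(1 for y in ys if y == b) / n
            if pxy > 0:
                mi += pxy * math.log(pxy / (px * py))
    return mi

rng = random.Random(2)
# population in which gene 0 and gene 1 are tightly linked, gene 2 is independent
pop = []
for _ in range(500):
    b = rng.randint(0, 1)
    pop.append((b, b, rng.randint(0, 1)))

g0, g1, g2 = ([ind[i] for ind in pop] for i in range(3))
print(mutual_information(g0, g1), mutual_information(g0, g2))
```

Methods in the DSMGA/LTGA/P3 family build a full dependency matrix of this kind and cluster it into a linkage model that mixing operators then respect.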

FUZZ-IEEE | Fuzzy Networks: Analysis and Design | The tutorial focuses on the analysis and design of fuzzy networks. It highlights some recent research results of the presenters, such as those from the publications listed below. Fuzzy networks are similar to neural networks in terms of general structure; however, their nodes and connections are different. The nodes of fuzzy networks are fuzzy systems represented by rule bases, and the connections between the nodes are outputs from and inputs to these rule bases. In this context, apart from being a structural counterpart of a neural network, a fuzzy network is also a conceptual generalisation of a fuzzy system. [1] A. Gegov, Fuzzy Networks for Complex Systems: A Modular Rule Base Approach, Series in Studies in Fuzziness and Soft Computing (Springer, Berlin, 2011); [2] F. Arabikhan, Telecommuting Choice Modelling using Fuzzy Rule Based Networks, PhD Thesis (University of Portsmouth, UK, 2017); [3] A. Gegov, F. Arabikhan and N. Petrov, Linguistic composition based modelling by fuzzy networks with modular rule bases, Fuzzy Sets and Systems 269 (2015) 1-29; [4] X. Wang, A. Gegov, F. Arabikhan, Y. Chen and Q. Hu, Fuzzy network based framework for software maintainability prediction, Uncertainty, Fuzziness and Knowledge Based Systems 27/5 (2019) 841-862; [5] A. Yaakob, A. Serguieva and A. Gegov, FN-TOPSIS: Fuzzy networks for ranking traded equities, IEEE Transactions on Fuzzy Systems 25/2 (2016) 315-332; [6] A. Yaakob, A. Gegov and S. Rahman, Fuzzy networks with rule base aggregation for selection of alternatives, Fuzzy Sets and Systems 341 (2018) 123-144 |

FUZZ-IEEE | Graded logic aggregation and its applications in decision engineering | The tutorial presents graded logic as a soft computing model of observable and measurable human reasoning. Basic logic operations identifiable in human reasoning include various symmetric and asymmetric forms of simultaneity and substitutability. The novelty of this tutorial is the development of mathematical models of soft logic aggregators from experiments with human subjects, which supports the credibility of these aggregators. The tutorial presents the complete and continuous spectrum of graded logic aggregators, from drastic conjunction to drastic disjunction, including hyperconjunction (aggregators with andness above 1) and hyperdisjunction (aggregators with orness above 1). Averaging aggregators (partial conjunction, neutrality, and partial disjunction) are inserted between hyperconjunction and hyperdisjunction, providing a continuous andness-directed transition from drastic conjunction to drastic disjunction. Semantic components of human reasoning are expressed by selecting weights and annihilators. Using basic models of simultaneity and substitutability, we then create asymmetric aggregators of conjunctive and disjunctive partial absorption. These models, together with negation, are necessary in graded propositional calculus.\r\n\r\nThe second part of the tutorial focuses on the applicability of graded logic. We are primarily interested in decision support systems. The fundamental pillar of decision engineering is a Standard Model of Evaluation Reasoning (SMER), derived from observations of human evaluation decision making. SMER includes an aggregation structure based on graded propositional calculus, and it provides explainable decisions. The tutorial includes examples of building logic aggregation structures and complex evaluation criteria. The detailed examples show medical applications of the presented methodology: evaluating patient disability, the problem of risky therapy, and the vaccination priority evaluation applied to COVID-19. |
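One common family used to model the andness-directed transition between conjunctive and disjunctive averaging aggregation is the weighted power mean. The sketch below is illustrative only, with made-up inputs and weights, and is not necessarily the exact aggregator family covered in the tutorial.

```python
import math

def wpm(xs, ws, r):
    """Weighted power mean over inputs xs with weights ws (summing to 1).
    r << 0 behaves conjunctively (output pulled toward the smallest input),
    r = 1 is the neutral arithmetic mean, and r >> 1 behaves disjunctively
    (output pulled toward the largest input)."""
    if r < 0 and min(xs) == 0.0:
        return 0.0                      # 0 annihilates partial conjunction
    if r == 0:                          # geometric-mean limit
        return math.prod(x ** w for x, w in zip(xs, ws))
    return sum(w * x ** r for x, w in zip(xs, ws)) ** (1.0 / r)
```

Sweeping `r` from large negative to large positive values traces the continuous transition between conjunctive and disjunctive behavior; the annihilator check reflects the semantic choice that a zero input vetoes a partial conjunction.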

FUZZ-IEEE | FuzzyR: Fuzzy Logic Toolkit for R | FuzzyR is a free, open-source fuzzy logic toolbox for the R programming language. While retaining the existing functionality of previous toolboxes (e.g. FuzzyToolkitUoN), the main extension of the FuzzyR toolbox is the capability to optimise type-1 and interval type-2 fuzzy inference systems based on an extended ANFIS architecture. An accuracy function has also been added to provide performance indicators, featuring eight alternative accuracy measures, including the newer measure UMBRAE. In addition, graphical user interfaces are provided so that the properties of a fuzzy inference system can be visualised and manipulated. The latest release extends the toolbox to non-singleton fuzzy logic systems. |
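As a flavour of the kind of accuracy measure mentioned above, here is a language-neutral sketch of UMBRAE (unscaled mean bounded relative absolute error), which compares a forecast's errors against a benchmark method's errors. This follows one published formulation from memory and is an assumption, not the FuzzyR implementation (which is in R).

```python
def umbrae(actual, forecast, benchmark):
    """UMBRAE sketch: bound each relative absolute error into [0, 1),
    average, then unscale. Values below 1 mean the forecast beats the
    benchmark; above 1 mean it does worse."""
    brae = []
    for a, f, b in zip(actual, forecast, benchmark):
        e, e_star = abs(a - f), abs(a - b)
        if e + e_star == 0:
            brae.append(0.0)            # both methods exact: treat as a tie
        else:
            brae.append(e / (e + e_star))
    mbrae = sum(brae) / len(brae)
    return mbrae / (1.0 - mbrae)        # undefined if benchmark is always exact
```

The bounding step is what makes the measure robust: a single near-zero benchmark error cannot blow up the average, unlike plain relative errors.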

IJCNN | Introduction to self-supervised learning and its applications | Supervised deep learning has seen tremendous success, but relies on large amounts of carefully labeled data that are not readily available in many application domains. This inspired the development of alternative approaches, among which self-supervised learning has seen rapid progress and quick adoption in various fields, ranging from natural language processing to image and speech recognition. In self-supervised learning, neural networks are trained on large, unlabeled datasets using pretext tasks (pretraining) before being trained on a downstream task in a supervised manner on (usually much) smaller, labeled datasets (fine-tuning). Self-supervised learning approaches promise to produce robust models that can be adapted to different tasks in a data-efficient way and even begin to outperform supervised approaches.\r\n\r\nThis tutorial provides an overview and introduction to self-supervised learning for a broad audience, ranging from practitioners to researchers and engineers, who want to become familiar with this exciting and emerging area of deep learning. We will focus on key concepts and ideas, including contrastive and non-contrastive approaches and their applications in various domains. The tutorial will also provide a hands-on experience where participants can try out self-supervised learning using Python and Jupyter Notebooks. |
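The contrastive approaches mentioned above train an encoder so that two views of the same example (a positive pair) land close together in embedding space while other examples (negatives) are pushed away. A minimal NumPy sketch of an InfoNCE-style loss for one anchor, with invented toy embeddings, assuming unit-normalised vectors:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for a single anchor embedding:
    cross-entropy of picking the positive among positive + negatives,
    using cosine similarity (inputs assumed unit-normalised)."""
    sims = np.concatenate(([anchor @ positive], negatives @ anchor)) / temperature
    sims = sims - sims.max()                 # subtract max for numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()
    return -np.log(probs[0])                 # index 0 is the positive pair
```

The loss is near zero when the positive is far more similar to the anchor than any negative, and grows as negatives become competitive, which is exactly the pretext-task signal used during pretraining.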

IJCNN | Introduction to Hardware-Aware Neural Architecture Search | As deep neural networks (DNNs) can have complex architectures with millions of trainable parameters, their design and training are difficult even for highly qualified experts. In order to reduce human effort, neural architecture search (NAS) methods have been developed to automate the entire design process. NAS methods typically combine searching in the space of candidate architectures and optimizing (learning) the weights using a gradient method. With the aim of reaching desired latency and providing high energy efficiency, specialized hardware accelerators for DNN inference have in recent years been developed for cutting-edge applications running on resource-constrained devices. In this direction, hardware-aware NAS methods have been adopted to design DNN architectures (and weights) optimally for a given hardware platform. In this tutorial, we survey the critical elements of NAS methods that – to various extents – consider hardware implementation of the resulting DNNs. We will classify these methods into three major classes: single-objective NAS (no hardware is considered), hardware-aware NAS (the DNN is optimized for a particular hardware platform), and NAS with hardware co-optimization (hardware is directly co-optimized with the DNN as part of NAS). We emphasize the multi-objective design approach that must be adopted in NAS and focus on co-design algorithms developed for concurrent optimization of DNN architectures and hardware platforms. As most research in this area deals with NAS for image classification using convolutional neural networks, our case studies will be devoted to this application. After attending the tutorial, the participants will understand why and how NAS and hardware co-optimization are currently used to build cutting-edge implementations of DNNs. |
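To make the hardware-aware idea concrete, here is a toy random-search sketch over a made-up (depth, width) space: candidates whose estimated latency exceeds a hardware budget are rejected before accuracy is even considered. Both proxy functions are invented for illustration; real NAS uses trained or measured estimators.

```python
import random

def proxy_accuracy(depth, width):
    return 1.0 - 1.0 / (depth * width)       # made-up proxy: bigger nets score higher

def proxy_latency_ms(depth, width):
    return 0.5 * depth * width               # made-up proxy: bigger nets are slower

def random_search(budget_ms, trials=200, seed=0):
    """Hardware-aware random search: maximise proxy accuracy subject to a
    latency budget for the target platform."""
    rng = random.Random(seed)
    best, best_acc = None, -1.0
    for _ in range(trials):
        depth, width = rng.randint(1, 8), rng.randint(8, 64)
        if proxy_latency_ms(depth, width) > budget_ms:
            continue                          # hardware constraint: reject candidate
        acc = proxy_accuracy(depth, width)
        if acc > best_acc:
            best, best_acc = (depth, width), acc
    return best, best_acc
```

Replacing the hard budget with a weighted sum, or keeping all non-dominated (accuracy, latency) pairs, turns this into the multi-objective formulation the tutorial emphasizes.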

IJCNN | Privacy-preserving machine and deep learning with homomorphic encryption: an introduction | Privacy-preserving machine and deep learning with homomorphic encryption (HE) is a novel and promising research area aiming at designing machine and deep learning solutions able to operate while guaranteeing the privacy of user data. Designing such solutions requires completely rethinking and redesigning machine and deep learning models and algorithms to match the severe technological and algorithmic constraints of HE. The aim of this tutorial is to provide an introduction to this complex yet challenging research area, also providing tools and solutions for the design of privacy-preserving convolutional neural networks (CNNs). The tutorial will provide both an algorithmic and a technological perspective on the field, integrating theory with a practical coding session in Python. Furthermore, research challenges and available software resources for privacy-preserving deep learning with HE will be described and discussed. |
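To give a flavour of the homomorphism such systems rely on, here is a textbook Paillier sketch with tiny, deliberately insecure parameters: multiplying two ciphertexts decrypts to the sum of the plaintexts, which is the additive building block behind encrypted aggregation. This is an illustrative toy, not any particular HE library's API.

```python
from math import gcd

# Toy Paillier cryptosystem (tiny primes, NOT secure, illustration only).
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1
mu = pow(lam, -1, n)                            # valid since L(g^lam mod n^2) = lam mod n

def encrypt(m, r=7):
    """Encrypt m < n with randomiser r coprime to n."""
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n
```

The additive property, `decrypt(encrypt(a) * encrypt(b) mod n^2) == a + b`, is what lets a server accumulate encrypted intermediate results (e.g. weighted sums in a CNN layer) without ever seeing the user's data.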

IJCNN | From CNNs to Symbolic Explainable Models to Protection Against Adversarial Attacks | Along with the advent of deep learning and its quick adoption, there is concern about using models that we don’t really understand. Because of this concern, many critical applications of deep learning are hindered. The concern about the transparency and trustworthiness of these models is so high that it is now a major research focus of Artificial Intelligence (AI) programs at funding agencies like DARPA and NSF in the US. If we can make deep learning explainable and transparent, the economic impact of such a technology would be in the trillions of dollars.\r\n\r\nOne of the specific forms of Explainable AI (XAI) envisioned by DARPA involves the recognition of objects based on identification of their parts. For example, this form requires that, to predict that an object is a cat, the system must also recognize some of the specific features of a cat, such as fur, whiskers, and claws. Making object prediction contingent on recognition of parts provides additional verification for the object and makes the prediction robust and trustworthy.\r\n\r\nThe first part of this tutorial will review some of the existing methods of XAI in general and then those that are specific to computer vision.\r\n\r\nThe second part of this tutorial will cover a new method that decodes a convolutional neural network (CNN) to recognize parts of objects. The method teaches a second model the composition of objects from parts and the connectivity between the parts. This second model is a symbolic and transparent model. Experimental results will be discussed, including those related to object detection in satellite images. Contrary to conventional wisdom, experimental results show that part-based models can substantially improve the accuracy of many CNN models. Experimental results also show that part-based models can provide protection from adversarial attacks. Thus, a school bus will not become an ostrich with the tweak of a few pixels. |
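The "school bus to ostrich" line refers to adversarial examples: tiny, targeted input perturbations that flip a model's prediction. A minimal gradient-sign (FGSM-style) sketch against a toy logistic classifier, with invented weights and input, shows the mechanism; this is not the tutorial's defence method, only the attack it defends against.

```python
import numpy as np

# Toy logistic classifier with made-up, fixed weights.
w, b = np.array([2.0, -1.0]), 0.0

def predict(x):
    """P(class 1) under the toy logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y, eps):
    """Perturb x by eps in the gradient-sign direction that increases the
    log-loss for true label y (0 or 1): the fast gradient sign method."""
    grad_x = (predict(x) - y) * w            # d(log-loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)
```

Even this two-dimensional toy flips a confident prediction with a bounded perturbation; in image space, the same mechanism changes each pixel only slightly, which is why the attack is visually imperceptible.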

IJCNN | Neural Network Self-learning: Consequence-driven Systems Theory | The most challenging concept in neural network learning is self-learning: learning with no teacher of any kind, providing neither advice nor reinforcement.\r\n\r\nThis tutorial presents the self-learning paradigm in the framework of the theory of consequence-driven systems, where consequence is viewed as a crucial concept of intelligent systems, and systems are viewed in both a behavioral and a genetic environment.\r\n\r\nStarting with the neural network approach to machine learning, the tutorial first explains the concept of supervised learning, where explicit advice is given to the learner on how to behave in a situation. Deep learning is a current extension of supervised learning. Transfer learning appears in the process, because a previous lecture influences the learning in a current lecture. The tutorial continues with reinforcement learning, where there is no advice, but there is explicit external reinforcement in terms of reward or punishment. The basic paradigm of immediate reinforcement is considered, as well as the challenge of delayed reinforcement learning. The first solution for learning with delayed rewards, given by the Crossbar Adaptive Array (CAA) neural network, is presented, as well as later solutions including Q-learning. The tutorial then turns to the self-learning paradigm, which assumes no external teacher of any kind: no reinforcement, no advice. The CAA is an example of a neural network using self-learning. It computes both cognition (action) and emotion in a crossbar fashion, in the same memory structure. It introduced into neural networks the concepts of emotion, emotion backpropagation, and genetic chromosomes, among others. It was the first Artificial Intelligence system that contributed to the Zajonc-Lazarus emotion-cognition primacy debate in Psychology. The CAA and consequence-driven systems theory also include the concepts of motivation, curiosity, and personality, which will be discussed in the tutorial. |
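Since the abstract contrasts delayed-reward learning with Q-learning, here is a minimal tabular Q-learning sketch on a made-up three-state chain (reward only at the far end), showing how bootstrapped updates propagate a delayed reward backwards. The environment and constants are invented for illustration.

```python
import random

N_STATES, ACTIONS = 3, (0, 1)          # action 0 = left, 1 = right; state 2 is terminal
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Made-up chain environment: +1 reward only on reaching the last state."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: Q[(s, b)])
            s2, r = step(s, a)
            # Q-learning update: bootstrap on the best action in the next state
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s = s2
```

After training, the learned values rank "move right" above "move left" in every non-terminal state, even though only the final transition is ever rewarded: the delayed reward has been propagated back through the bootstrapped targets.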

IJCNN | Brain-inspired spiking neural networks for deep learning and knowledge representation: Methods, Systems, Applications | This 2-hour tutorial demonstrates that the third generation of artificial neural networks, spiking neural networks (SNN), are not only capable of deep, incremental learning of temporal or spatio-temporal data, but also enable the extraction of knowledge representations from the learned data and the tracing of knowledge evolution over time from the incoming data. Similarly to how the brain learns, these SNN models do not need to be restricted in the number of layers, neurons in each layer, etc., as they adopt the self-organising learning principles of the brain. The tutorial consists of 3 parts:\r\n1.\tBrain-inspired SNN for deep learning. NeuCube.\r\n2.\tDesign of SNN systems in NeuCube\r\n3.\tApplications for brain and other data modelling\r\n\r\nPart 1 presents the theoretical background of SNN, deep learning algorithms and knowledge representation in SNN. It introduces the brain-inspired SNN architecture NeuCube.\r\nPart 2 teaches, step by step on case-study data, how to develop SNN systems using NeuCube (free and open-source software, along with a cloud-based version, available from www.kedri.aut.ac.nz/neucube and www.neucube.io).\r\nPart 2 and Part 3 demonstrate the use of NeuCube on case studies, including: predictive modelling of EEG and fMRI data measuring cognitive processes and response to treatment; AD prediction; personalised stroke prediction; understanding depression; and predicting environmental hazards and extreme events. It is demonstrated that brain-inspired SNN architectures, such as NeuCube, allow for knowledge transfer between humans and machines through building brain-inspired Brain-Computer Interfaces (BI-BCI). This opens the way to building a new type of AI system: open and transparent AI.\r\nReference: N. Kasabov, Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence, Springer, 2019, https://www.springer.com/gp/book/9783662577134 |
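The basic computational unit of an SNN is a spiking neuron. A minimal leaky integrate-and-fire (LIF) sketch, with illustrative constants unrelated to NeuCube's internals, shows how such a neuron turns a stream of input currents into a spike train:

```python
def lif(currents, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential decays by a
    leak factor each step, integrates the input current, and emits a spike
    (then resets) when it crosses the threshold."""
    v, spikes = 0.0, []
    for i in currents:
        v = leak * v + i            # leaky integration of input current
        if v >= threshold:
            spikes.append(1)
            v = 0.0                 # reset membrane potential after a spike
        else:
            spikes.append(0)
    return spikes
```

The timing of the output spikes, not just their count, carries information, which is why SNNs are well suited to the temporal and spatio-temporal data the tutorial targets.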

IJCNN | Randomization Based Deep and Shallow Learning Methods for Classification and Forecasting | This tutorial will first introduce the main randomization-based feedforward learning paradigms with closed-form solutions. A popular instantiation of such feedforward neural networks is the random vector functional link neural network (RVFL), which originated in the early 1990s. Other feedforward methods included in the tutorial are random weight neural networks (RWNN), extreme learning machines (ELM), stochastic configuration networks (SCN), broad learning systems (BLS), etc. Another randomization-based paradigm is the random forest, which exhibits highly competitive performance in batch-mode classification. A further paradigm is based on the kernel trick, such as kernel ridge regression, which includes randomization for scaling to large training data. The tutorial will also consider computational complexity with the increasing scale of classification/forecasting problems, and will present extensive benchmarking studies using classification and forecasting datasets. |
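A minimal NumPy sketch of the RVFL idea: the hidden weights are random and never trained, the input connects directly to the output (the "direct link"), and only the output weights are solved in closed form by ridge regression. The hyperparameters below are illustrative.

```python
import numpy as np

def rvfl_fit(X, y, n_hidden=50, reg=1e-3, seed=0):
    """Fit an RVFL-style model: random fixed hidden layer plus direct
    input-output links, output weights by closed-form ridge regression."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                  # random, untrained hidden features
    D = np.hstack([X, H])                   # direct links + hidden features
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ y)
    return (W, b, beta)

def rvfl_predict(model, X):
    W, b, beta = model
    D = np.hstack([X, np.tanh(X @ W + b)])
    return D @ beta
```

Because the only "training" is one linear solve, fitting is orders of magnitude faster than gradient-based backpropagation, which is the main appeal of this family of methods.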

IJCNN | Ethical Challenges in Computational Intelligence Research | Ethical considerations are receiving increased attention with regard to providing responsible personalization for robots and autonomous systems. This is partly a consequence of the currently limited deployment of such systems in human support and interaction settings. The tutorial will give an overview of the most commonly expressed ethical challenges and the ways being undertaken to reduce their impact, using the findings of an earlier review ( https://www.frontiersin.org/articles/10.3389/frobt.2017.00075/full ), supplemented with recent work and initiatives. This includes the challenges identified in the “Statement on research ethics in artificial intelligence” ( https://www.forskningsetikk.no/globalassets/dokumenter/4-publikasjoner-som-pdf/statement-on-research-ethics-in-artificial-intelligence.pdf ). The tutorial will exemplify the challenges related to privacy, security and safety through several examples from our own and others’ work. |

IJCNN | Experience Replay for Deep Reinforcement Learning | Reinforcement learning (RL) is expected to play an important role in our AI and machine learning era, as is evident from its latest major advances, particularly in games. This is due to its flexibility and arguably minimal designer intervention, especially when the feature extraction process is left to a robust model such as a deep neural network. Although deep learning has alleviated the long-standing burden of manual feature design, another important issue remains to be tackled: the experience-hungry nature of RL models, which is mainly due to bootstrapping and exploration. One important technique for tackling this issue is experience replay. Naturally, it allows us to capitalize on already gained experience and to shorten the time needed to train an RL agent. The frequency and depth of replay can vary significantly, and a unifying view and clear understanding of the issues related to off-policy and on-policy replay are generally lacking. For example, on the far end of the spectrum, extensive experience replay, although it should conceivably help reduce the data intensity of the training period, when done naively puts significant constraints on the practicality of the model and requires time and space complexity that can grow exponentially, rendering the method impractical. In this tutorial we will tackle the issues and techniques related to the theory and application of deep reinforcement learning and experience replay, and how and when these techniques can be applied effectively to produce a robust model. In addition, we will promote a unified view of experience replay that involves replaying and re-evaluation of the target updates. What is more, we will show that the generalized intensive experience replay method can be used to derive several important algorithms as special cases of other methods, including n-step true online TD and LSTD. This surprising but important view can help the RL community. |
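The core data structure behind experience replay is a bounded buffer of past transitions that is sampled uniformly for off-policy updates. A minimal sketch, with illustrative capacity and transition format:

```python
import random
from collections import deque

class ReplayBuffer:
    """Uniform experience replay: store (s, a, r, s_next, done) transitions
    up to a fixed capacity and sample past experience in mini-batches."""

    def __init__(self, capacity=10000, seed=0):
        self.buf = deque(maxlen=capacity)   # oldest transitions are evicted first
        self.rng = random.Random(seed)

    def push(self, transition):
        self.buf.append(transition)

    def sample(self, batch_size):
        # uniform sampling without replacement within one batch
        return self.rng.sample(list(self.buf), batch_size)

    def __len__(self):
        return len(self.buf)
```

Variations on this sketch, such as prioritised sampling or re-evaluating the stored targets at replay time, are exactly where the design questions discussed in the tutorial arise.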