We organise semimonthly talks on various topics in Computer Science & Engineering. You can find details of upcoming and previous talks on this page, along with slides and videos of the previous talks.

If you are interested in attending the talks, please fill out this form to subscribe to our mailing list. You can also follow us on Facebook, Twitter, and LinkedIn for regular updates.

30 April

Time: 6:00 pm IST

Kalyanmoy Deb
Koenig Endowed Chair Professor, Michigan State University

Title of the talk: Explainable AI (XAI) using Nonlinear Decision Trees

Abstract: For many years, practitioners have been interested in solving optimization and AI-related problems to find a single acceptable solution. With the advent of efficient methodologies and the human quest for knowledge, optimization and AI systems are now embedded with existing knowledge or modified to extract essential knowledge with which they solve problems. Efforts to explain AI systems with interpretable rules are also gaining attention. In this talk, we present knowledge-driven optimization and explainable AI applications using a novel nonlinear decision tree approach to solving a variety of practical problems.
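
As a rough, hand-made illustration of the idea (not Prof. Deb's actual method), the Python sketch below contrasts a conventional axis-aligned decision-tree split with a single nonlinear split rule on toy data with a circular class boundary; the data and the particular split expression are assumptions made up for this example.

```python
import numpy as np

# Toy data: class 1 lies inside a circle of radius 1 (a nonlinear boundary).
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 <= 1.0).astype(int)

# A classic decision-tree node tests one feature against a threshold;
# a single such rule cannot capture the circular boundary.
def axis_aligned_rule(x):
    return int(x[0] <= 0.5)

# A nonlinear decision-tree node tests a nonlinear expression of the
# features.  One such rule is already an interpretable explanation:
# "predict class 1 iff x0^2 + x1^2 <= 1".
def nonlinear_rule(x):
    return int(x[0] ** 2 + x[1] ** 2 <= 1.0)

for name, rule in [("axis-aligned rule", axis_aligned_rule),
                   ("nonlinear rule   ", nonlinear_rule)]:
    acc = np.mean([rule(x) == t for x, t in zip(X, y)])
    print(f"{name} accuracy: {acc:.2f}")
```

In a nonlinear decision tree the split expressions are learned rather than hand-picked, but the payoff is the same: far fewer, more interpretable rules than an axis-aligned tree would need for the same boundary.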


Title of the talk: Decision-making in the face of uncertainty

Abstract: The future will be shaped by large, complex systems of systems that need to operate in increasingly dynamic environments where changes cannot be deduced a priori. Typically, a complex system of systems is understood in terms of its various parts and the interactions between them; this understanding is usually partial and uncertain, while the overall system behavior emerges from these interactions over time. With the overall system behavior hard to know a priori, and with conventional techniques for system-wide analysis either lacking in rigor or defeated by the scale of the problem, current practice often relies exclusively on human expertise for the analysis and synthesis that lead to decision-making. This is a time-, effort-, cost- and intellect-intensive endeavor. The talk will present an approach aimed at overcoming these limitations and illustrate its efficacy on a few representative real-world problems.



Title of the talk: Brain Variable Reward Structure for Cooperative Machine Learning in IoT Network

Abstract: Recent advances in machine learning research have resulted in state-of-the-art techniques in which Reinforcement Learning (RL) agents use either value-based or policy-based methods with the goal of reducing variance in the reward signal, thereby trying to reach an optimal state in the shortest period. Metrics such as the number of iterations taken to reach the optimal reward structure, or the number of interactions needed with the environment to achieve this, are generally used as key performance indicators. A large body of research shows how agents can achieve this using either large amounts of training data or complex algorithms that require power- and resource-intensive computational elements. Such a strategy, however, may not be applicable to a resource- and power-sensitive network of IoT devices and, more importantly, differs fundamentally from how humans learn.

To overcome this challenge, we have been looking to the field of neuroscience for inspiration from how the human brain works, specifically the release of dopamine in response to a variable reward structure. Typical RL systems focus on receiving observations from the environment, calculating a reward, and then deciding on the next set of actions at fixed intervals or based on fixed responses. However, scientific research on human brain activity has shown higher dopamine release in response to rewards received at variable times. In this presentation, findings from two particularly interesting areas of neuroscience and psychology are presented. The first is related to dopamine-based reward-stimulated learning, which supports the concept of cooperative learning: it has been shown that active dopamine release increases the processing of new information. The second is related to studies finding that cooperative groups generate more participation and stimulate multiple brain regions; in such an environment, the efficiency of the network increases dramatically.

We will discuss how RL techniques can learn from this behavior, especially in an IoT system that may contain several nodes. Rather than expecting agents to run on all nodes at fixed time intervals, our research investigates the efficiency gained by invoking agents at different time instances, thereby providing them with an opportunity to receive reward signals. Just as a variable reward structure results in increased dopamine activity in the human brain, such an approach can help achieve higher efficiency in IoT systems.
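
Purely to make the timing idea concrete (this is not the speaker's system; the environment, reward windows, and parameters are all invented), the toy Python sketch below compares an agent polled at fixed intervals with one invoked at random, variable intervals, using the same average invocation budget.

```python
import random

random.seed(1)

# Hypothetical environment: reward "windows" open at random times and stay
# open for 1 time unit.  An agent only observes a reward if it happens to be
# invoked while a window is open.
HORIZON = 1000.0
windows = [random.uniform(0, HORIZON) for _ in range(50)]

def rewards_seen(invocation_times):
    return sum(any(w <= t <= w + 1.0 for w in windows) for t in invocation_times)

MEAN_GAP = 5.0

# Fixed schedule: invoke the agent every MEAN_GAP time units.
fixed = [i * MEAN_GAP for i in range(int(HORIZON / MEAN_GAP))]

# Variable schedule: same average number of invocations, but the gaps are
# drawn at random, so reward signals arrive at unpredictable times
# (the "variable reward" timing the abstract refers to).
variable, t = [], 0.0
while t < HORIZON:
    t += random.expovariate(1.0 / MEAN_GAP)
    variable.append(t)

print("fixed   :", len(fixed), "invocations,", rewards_seen(fixed), "rewards seen")
print("variable:", len(variable), "invocations,", rewards_seen(variable), "rewards seen")
```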



Title of the talk: From Smart-Sensing to Smart Living

Abstract: We live in an era in which our physical and personal environments are becoming increasingly intertwined and smarter due to the advent of pervasive sensing, wireless communications, computing, control, and actuation technologies. Indeed, our daily lives in smart cities and connected communities depend on a wide variety of cyber-physical infrastructures, such as smart city, smart energy, smart transportation, smart healthcare, and smart manufacturing. Alongside, the availability of wireless sensors, the Internet of Things (IoT), and rich mobile devices is empowering humans with fine-grained information and opinion collection through crowdsensing about events of interest, resulting in actionable inferences and decisions. This synergy has led to cyber-physical-social (CPS) convergence with humans in the loop, exhibiting complex interactions, inter-dependence, and adaptation between engineered/natural systems and human users, with the goal of improving human quality of life and experience in smart living environments. However, huge challenges are posed by the scale, heterogeneity, big data, social dynamics, and resource limitations of sensors, IoT, and CPS networks. This talk will highlight unique research challenges in smart living, followed by novel frameworks and models for efficient mobility management, data gathering and fusion, security and trustworthiness, and the trade-off between energy and information quality in multi-modal context recognition. Case studies and experimental results from smart energy and smart healthcare applications will be presented. The talk will conclude with directions for future research.



Title of the talk: What would make an intelligent system generally intelligent?

Abstract: Intelligence may be understood as the ability of a system to construct models, usually in the service of solving problems or controlling behavior. Some problems are general enough to require a unified model of the environment of the system, including the intelligent system itself, and the nature and results of its interactions. While many complex organisms implement such models, comparatively little AI research has been dedicated to them. What types of models and algorithms can support such a degree of generality? Can we identify design principles of biological and social systems that we can transfer into AI systems?



Title of the talk: Participatory Budgeting - Making Budgeting Great Again

Abstract: Participatory Budgeting is a grassroots, direct-democracy approach to deciding on the usage of public funds, most commonly in the context of the yearly budget of a municipality. I will discuss this concept, provide some appropriate mathematical models for it, and concentrate on related algorithmic considerations, most notably novel aggregation methods and generalizations such as the incorporation of liquid democracy, project interactions, and the possibility of deciding on hierarchical budgets.
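
As a small, self-contained example of one standard aggregation method (greedy approval-based budgeting; the talk may well cover different or more sophisticated methods), here is a Python sketch with made-up projects, costs, and ballots.

```python
# Greedy approval-based aggregation: fund projects in decreasing order of
# approval count, skipping any project that no longer fits in the budget.
# Projects, costs, and ballots below are purely illustrative.
projects = {            # project -> cost
    "bike lanes": 400,
    "playground": 250,
    "library wifi": 100,
    "street lights": 300,
}
ballots = [             # each voter approves a set of projects
    {"bike lanes", "library wifi"},
    {"playground", "library wifi"},
    {"bike lanes", "street lights"},
    {"playground"},
]
budget = 600

approvals = {p: sum(p in b for b in ballots) for p in projects}
funded, remaining = [], budget
for p in sorted(projects, key=lambda p: -approvals[p]):
    if projects[p] <= remaining:
        funded.append(p)
        remaining -= projects[p]

print("funded:", funded, "| unspent:", remaining)
```

Even this simple rule raises the kinds of questions the talk addresses, for example how ties and project interactions should be handled, or whether voters could instead delegate their ballots (liquid democracy).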



Title of the talk: Trustworthy AI Systems

Abstract: We have experienced unprecedented growth in AI in recent years. Coupled with it, we have also seen simple adversarial attacks on, and bias in, AI systems. In this talk, we will look at how to build trustworthy AI systems that prevent and thwart attacks on AI systems and protect AI models. Specifically, we will discuss threat models extended to AI systems for adversarial attacks and their mitigation, bias compensation, and taking advantage of advances in blockchain technology and fully homomorphic encryption (FHE) to make AI systems trustworthy.
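
As a minimal, framework-free illustration of the kind of adversarial attack referred to above, the sketch below applies a gradient-sign-style perturbation to a toy logistic-regression "model"; the weights, input, and step size are made-up values, not anything from the talk.

```python
import numpy as np

# Toy "model": logistic regression with fixed, made-up weights.
w = np.array([2.0, -1.0, 0.5])
b = -0.2

def class1_probability(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.4, 0.1, 0.3])          # a clean input classified as class 1
print("clean score      :", round(class1_probability(x), 3))

# Gradient-sign perturbation: nudge every feature a small step in the
# direction that most decreases the class-1 score.  For this linear model
# that direction is simply -sign(w).
eps = 0.3
x_adv = x - eps * np.sign(w)
print("adversarial score:", round(class1_probability(x_adv), 3))   # drops below 0.5
```

Small, targeted perturbations like this can flip a model's prediction, which is exactly the kind of behavior that trustworthy-AI defenses aim to detect and mitigate.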



Title of the talk: Multi-view invariance and grouping for self-supervised learning

Abstract: In this talk I will present our recent efforts in representation learning that can benefit semantic downstream tasks. Our methods build on two simple yet powerful insights: 1) the representation must be stable under different data augmentations or "views" of the data; 2) the representation must group together instances that co-occur in different views or modalities. I will show that these two insights can be applied to weakly supervised and self-supervised learning, on image, video, and audio data, to learn highly performant representations. For example, these representations outperform weakly supervised representations trained on billions of images or millions of videos, can outperform ImageNet-supervised pretraining on a variety of downstream tasks, and have led to state-of-the-art results on multiple benchmarks. These methods build upon prior work in clustering and contrastive methods for representation learning. I will conclude the talk by presenting shortcomings of our work and some preliminary thoughts on how they may be addressed.
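
A minimal numpy sketch of the first insight: a contrastive, InfoNCE-style objective that pulls two "views" of the same instance together and pushes other instances apart. The embeddings below are random stand-ins rather than features from a real encoder, and the implementation is only a simplified approximation of the losses used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Stand-in embeddings for two augmented "views" of the same 8 instances.
# In a real system both would come from an encoder applied to two
# augmentations of each image, clip, or audio segment.
z1 = normalize(rng.normal(size=(8, 32)))
z2 = normalize(z1 + 0.1 * rng.normal(size=(8, 32)))   # view 2 is close to view 1

def info_nce(a, b, temperature=0.1):
    # Row i of `a` should match row i of `b`; every other row is a negative.
    logits = a @ b.T / temperature                 # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))          # -log p(correct pairing)

print("loss, matching views:", round(float(info_nce(z1, z2)), 3))        # low
print("loss, shuffled views:", round(float(info_nce(z1, z2[::-1])), 3))  # high
```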



Title of the talk: Picking Random Vertices

Abstract: We survey some recent graph algorithms that are based on picking a vertex at random and declaring it to be a part of the solution. This simple idea has been deployed to obtain state-of-the-art parameterized, exact exponential time, and approximation algorithms for a number of problems, such as Feedback Vertex Set and 3-Hitting Set. We will also discuss a recent 2-approximation algorithm for Feedback Vertex Set in Tournaments that is based on picking a vertex at random and declaring it to /not/ be part of the solution.
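
To make the "pick a random vertex and declare it part of the solution" idea concrete, here is a tiny Python sketch of the classic randomized argument for 3-Hitting Set: every unhit set has at most three elements, so a uniformly random one of them lies in a fixed optimal solution with probability at least 1/3, and repeating the whole guess roughly 3^k times succeeds with constant probability. The instance is made up, and this is only an illustration of the principle, not one of the specific algorithms surveyed in the talk.

```python
import random

random.seed(0)

# Made-up 3-Hitting Set instance: each set has at most 3 elements; we look
# for a solution of at most k elements that intersects ("hits") every set.
sets = [{1, 2, 3}, {2, 4, 5}, {1, 5, 6}, {3, 4, 6}, {2, 6, 7}]
k = 3

def random_guess(sets, k):
    """One randomized attempt: repeatedly take an unhit set, pick one of its
    (at most 3) elements at random, and declare it part of the solution."""
    solution = set()
    while True:
        unhit = [s for s in sets if not (s & solution)]
        if not unhit:
            return solution            # every set is hit
        if len(solution) == k:
            return None                # budget exhausted; this attempt failed
        # With probability >= 1/3 this choice agrees with an optimal solution.
        solution.add(random.choice(sorted(unhit[0])))

# Each attempt succeeds with probability >= (1/3)^k when a size-k solution
# exists, so about 3^k independent attempts suffice with good probability.
for _ in range(5 * 3 ** k):
    sol = random_guess(sets, k)
    if sol is not None:
        print("hitting set of size", len(sol), ":", sorted(sol))
        break
```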



Title of the talk: Why do we need to Optimize Deep Learning Models?

Abstract: Designing deep learning-based solutions is becoming a race to train deeper models with ever more layers. While a large, deep model can provide competitive accuracy, it creates many logistical challenges and unreasonable resource requirements during development and deployment. This has been one of the key reasons deep learning models are not used extensively in various production environments, especially on edge devices. There is an immediate need to optimize and compress these deep learning models to enable on-device intelligence. In this talk, I will cover the different deep learning model optimization techniques and the challenges involved in production-ready model optimization.
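
As one concrete example of the kind of optimization meant here, the sketch below applies naive symmetric post-training 8-bit quantization to a made-up weight matrix and reports the memory saving and reconstruction error; it is framework-free and purely illustrative of the principle, not a production technique from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up float32 weight matrix standing in for one layer of a network.
w = rng.normal(scale=0.1, size=(256, 256)).astype(np.float32)

# Naive symmetric post-training quantization to int8: store int8 values
# plus a single float scale per tensor.
scale = float(np.abs(w).max()) / 127.0
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale

print("float32 size :", w.nbytes, "bytes")
print("int8 size    :", w_int8.nbytes, "bytes (about 4x smaller)")
print("max abs error:", float(np.abs(w - w_dequant).max()))
```

Quantization is only one of the optimization techniques the abstract alludes to; pruning and knowledge distillation are other common examples.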