If you are interested in attending the talks, please fill out this form to subscribe to our mailing list. You can also follow us on Facebook, Twitter, and LinkedIn for regular updates, and our YouTube channel for recordings of previous lectures.
Date: Fri, 13 Sept 2024
Time: 04:15 PM IST
Venue: Seminar Room, CSE
Dr. Sidharth Sharma
IIT Indore
Title of the talk: SLA-Aware and Verifiable Service Configurations in Software-defined Networks
Abstract: In this seminar, I will present our work on integrating key service level agreement (SLA) parameters — such as delay, bandwidth, and availability — within software-defined networks (SDNs). I will then discuss our research on real-time verification of SDN data planes, focusing on a novel method to ensure the correctness of telecom network configurations.
Title of the talk: MANUS: Markerless Grasp Capture using Articulated 3D Gaussians.
Abstract: Understanding how we grasp objects with our hands is crucial for advancements in fields like robotics and mixed reality. However, this is a challenging problem that demands precise representation of both the hand and the objects involved. Existing methods often rely on skeletons, meshes, or parametric models that fail to accurately capture hand shape, leading to imprecise contact estimations. We introduce MANUS, a novel method for Markerless Hand-Object Grasp Capture using Articulated 3D Gaussians. Our approach extends 3D Gaussian splatting [33] to create a high-fidelity representation of articulating hands. By leveraging Gaussian primitives optimized from multi-view pixel-aligned losses, our method efficiently and accurately estimates contact points between the hand and the object. For optimal accuracy, our method requires multiple camera views, which current datasets typically lack. To address this, we developed MANUS-Grasps, a new dataset that includes hand-object grasps captured from over 50 camera views across 30+ scenes, involving 3 subjects and totaling over 7 million frames. Alongside extensive qualitative results, we demonstrate that our method surpasses existing approaches in a quantitative contact evaluation that uses paint transfer from the object to the hand.
Title of the talk: Parameterized Complexity of Kidney Exchange Problem
Abstract: There are more than 90,000 people on the national transplant waiting list in need of a kidney in the United States. These patients often have a friend or family member who is willing to donate, but whose kidney type might not be compatible. To help match these patients to suitable donors, patient-donor compatibility can be modeled as a directed graph. Specifically, in the Kidney Exchange problem, the input is a directed graph G, a subset B of vertices (altruistic donors), and two integers l_p and l_c. An altruistic donor is a donor who is not paired with a patient; the remaining vertices are patient-donor pairs. Whenever a donor is compatible with a patient from a patient-donor pair, we place a directed edge from the donor vertex to the patient-donor pair. Here the donor vertex can be either altruistic or non-altruistic. The goal is to find a collection of vertex-disjoint cycles and paths covering the maximum number of patients such that each cycle has length at most l_c and each path has length at most l_p and begins at a vertex in B. The path and cycle lengths are bounded so that the surgeries can be performed simultaneously. Kidney Exchange has received a great deal of attention in recent years [IJCAI '18, IJCAI '22, IJCAI '23, AAAI '17, NeurIPS '20, EC '20]. In this talk we discuss our most recent work from IJCAI '24 on the parameterized complexity of the Kidney Exchange problem. First, we show that Kidney Exchange is FPT when parameterized by the number of vertex types in G, where two vertices have the same vertex type if they have the same in- and out-neighborhoods. On the other hand, we show that Kidney Exchange is W[1]-hard when parameterized by treewidth. Finally, we design a randomized $4^t n^{O(1)}$-time algorithm parameterized by t, the number of patients helped, significantly improving upon the previous state of the art of $161^t n^{O(1)}$ [IJCAI '22].
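To make the problem definition above concrete, here is a minimal Python sketch of how one might check a proposed solution for feasibility; the encoding of the input (an adjacency-set graph, pieces given as labelled vertex sequences) and all names are illustrative assumptions, not taken from the talk or the paper.

```python
# Hypothetical sketch: checking feasibility of a Kidney Exchange solution.
# graph: dict vertex -> set of out-neighbours (compatibility edges);
# B: set of altruistic donors; pieces: list of ("cycle"|"path", [vertices]).

def is_feasible(graph, B, pieces, l_c, l_p):
    used = set()
    for kind, verts in pieces:
        if len(set(verts)) != len(verts) or used & set(verts):
            return False                    # pieces must be vertex-disjoint
        used |= set(verts)
        hops = list(zip(verts, verts[1:]))
        if kind == "cycle":
            hops.append((verts[-1], verts[0]))
            if len(verts) > l_c:            # cycle length bound
                return False
        else:
            if len(hops) > l_p or verts[0] not in B:
                return False                # short paths must start in B
        if not all(v in graph[u] for u, v in hops):
            return False                    # every hop is a compatible edge
    return True

def patients_helped(B, pieces):
    # every covered vertex outside B is a patient-donor pair
    return sum(1 for _, vs in pieces for v in vs if v not in B)
```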
Title of the talk: LiNoVo: Longevity Enhancement of Non-Volatile Cache in Chip Multiprocessors
Abstract: The increasing use of chiplets, and the demand for high-performance yet low-power systems, will result in heterogeneous systems that combine both CPUs and accelerators (e.g., general-purpose GPUs). Chiplet-based designs also enable the inclusion of emerging memory technologies, since such technologies can reside on a separate chiplet without requiring complex integration in existing high-performance process technologies. One such emerging memory technology is spin-transfer torque (STT) memory, which has the potential to replace SRAM as the last-level cache (LLC). STT-RAM has the advantages of high density, non-volatility, and reduced leakage power, but suffers from higher write latency and energy compared to SRAM. However, by relaxing the retention time, the write latency and energy can be reduced at the cost of the STT-RAM becoming more volatile. The retention time and write latency/energy can be traded against each other by creating an LLC with multiple retention zones. With a multi-retention LLC, the challenge is to direct memory accesses to the most advantageous zone, to optimize for overall performance and energy efficiency. We propose ARMOUR, a mechanism for efficient management of memory accesses to a multi-retention LLC, where, based on the initial requester (CPU or GPU), cache blocks are allocated in the high (CPU) or low (GPU) retention zone. Furthermore, blocks that are about to expire are either refreshed (CPU) or written back (GPU). In addition, ARMOUR evicts CPU blocks with an estimated short lifetime, which further improves cache performance by reducing cache pollution. Our evaluation shows that ARMOUR improves average performance by 28.9% compared to a baseline STT-RAM-based LLC and reduces the energy-delay product (EDP) by 74.5% compared to an iso-area SRAM LLC.
Title of the talk: Objects, Actions, and their Interplay: What can we do using limited labels?
Abstract: There are large collections of “How-to” videos on the web. Can we achieve a fine-grained interpretation of objects, actions, and their interplay within these videos? This question sets the stage for our exploration. In this talk, I will present our recently developed models for visual relationship localization and open-set object detection in videos, emphasizing their reliance on limited supervision and showcasing their potential industrial applications. I will conclude the talk by discussing open areas in this space.
Title of the talk: CCPC: Payment Channels Across Blockchain Networks.
Abstract: Blockchain technology has revolutionized the world of finance and decentralized applications, yet challenges in scalability and interoperability persist. As these networks expand, their limited transaction throughput hinders their potential for widespread adoption and mass scalability. Additionally, the lack of seamless interoperability between diverse blockchain platforms further hampers the full potential of blockchain technology. This paper introduces the Cross-Chain Payment Channel (CCPC) protocol, a novel solution that not only enables interoperability between different blockchain networks but also has the potential to enhance scalability. The CCPC protocol enables secure and efficient off-chain fund transfers between two accounts residing in different blockchains, effectively breaking down the barriers of isolated and fragmented ecosystems. Through the establishment of direct payment channels between different blockchain networks, the CCPC protocol allows parties to engage in multiple cross-chain swaps without involving the main blockchain each time. This protocol can further be utilized in a blockchain sharding system to potentially reduce the fraction of cross-shard transactions recorded at the layer-1 blockchain. By doing so, the protocol can potentially improve latency and transactional throughput, presenting an ideal solution for overcoming scalability barriers while maintaining security. We have successfully implemented the CCPC protocol for Ethereum-based blockchains, establishing a cross-chain payment channel between two disparate blockchains. Through extensive experiments, we obtained essential metrics, such as gas usage and time, at various stages of cross-chain payment channel interactions.
Title of the talk: Frontiers of NLP in the Age of GenAI
Abstract: Large and very large language models have been instrumental in advancing the field of NLP and its usefulness in real-world use cases. GPT and its successively mightier successors have proven that more and more complex tasks can be solved by repurposing the underlying text generation task. Among the most popular products using these models are ChatGPT and Bing Chat. This talk will focus on what goes into building such tools, connecting the dots between these tools and some of the basics of models and tasks that are typically covered in curricula.
Title of the talk: Quo Vadis, Vision and Language (Part-I): Drawing as Complementary to Natural Language for Cross-Modal Tasks
Abstract: In this talk series, we will go deeper into the exciting intersection of vision and language research. Part I of this talk series will focus on the role of drawings as a cognitive tool and their potential in cross-modal tasks. We shall explore how drawings, a timeless medium of communication, can complement or sometimes even replace natural language in various applications. Our research examines sketch+text for Image Retrieval, sketch-guided object localization, and sketch-guided image inpainting. We shall also touch on future directions in this rapidly evolving field, considering recent breakthroughs in language understanding and their implications for vision and language models. (PS: Three to four talks in this series shall be delivered over the next couple of months under the department talk series).
Title of the talk: Machine Learning and Blockchain-based Urban Computing for Sustainable Smart Society
Abstract: Urban computing is the process of acquisition, integration, and analysis of big and heterogeneous data generated by a diversity of sources in urban spaces, such as sensors, devices, vehicles, buildings, and humans, to tackle the major issues that cities face. Machine learning and blockchain-based urban computing bring powerful computational techniques to bear on urban challenges such as pollution, anomaly detection and prediction, attacks on AI-based systems, device-level attacks, energy consumption, and traffic congestion. Using today's large-scale computing infrastructure and data gathered from sensing technologies, urban computing combines machine learning and blockchain with urban planning, transportation, environmental science, sociology, and other areas of urban studies, tackling specific problems with concrete methodologies in a data-centric computing framework. This talk offers an authoritative overview of the field, its fundamental techniques, advanced models, and novel applications.
Title of the talk: Understanding algorithms through proofs
Abstract: One way to prove that an algorithm works correctly is to produce an output together with a proof or, in other words, a transcript which is detailed enough to verify independently. Such a proof guarantees that hard problems such as kidney exchange, scheduling, or combinatorial arguments are solved fairly and correctly. But proofs also tell us about the algorithm that produced them and, being static objects, are easier to analyse. In this talk we will discuss how studying proofs can help us fix errors, discover inefficiencies, and suggest improvements in algorithms for satisfiability and optimization problems.
Title of the talk: Vitess Demystified: Navigating the World of Distributed Databases
Abstract: Vitess is "a battle-hardened opensource technology hosting some of the largest sites on the Internet" (https://planetscale.com/vitess). This talk will briefly introduce 'NewSQL', and talk about Vitess and specifically its architecture. The speaker will also provide us with a primer on what is required to serve queries in distributed databases and run cluster management operations.
Title of the talk: Learning-based 3D Digitisation of Humans & Terrains
Abstract: I will provide an overview of ongoing efforts at our research group exploring novel learning-based methods for large-scale digitization of humans and terrains, with applications in content generation for AR/VR. In particular, we will focus on large-scale rendition of realistic 3D virtual terrain, as well as digitizing humans and garments from a monocular image for 3D virtual try-on applications. Additionally, I will provide a high-level overview of orthogonal research efforts in the robotics and cognitive science domains (time permitting).
Title of the talk: Designing Algorithms for Deciphering Cause-Effect Relationships among Variables in Complex Real-world Systems
Abstract: The design and analysis of algorithms for learning cause-and-effect relationships between variables from massive datasets with thousands of variables is crucial in recommender systems (eCommerce, YouTube), genomics, proteomics, high-frequency stock trading, weather forecasting, space research, neuroscience, etc. In this talk, we will discuss a few such algorithms, including Bayesian network structure learning (BNSL) algorithms, and reflect on what kinds of algorithms will need to be built in the future.
Title of the talk: The amazing idiocy of complexity theorists
Abstract: It is said that one should not argue with idiots because they will bring you down to their level and then beat you with experience. This talk will focus on the strategy that the idiot is using and we'll see why it is a really awesome strategy.
Complexity theorists often try to solve problems that are too complicated to think about. By identifying a simple approach and then trying to analyze that one approach in detail, we are bringing the problem down to our level. Since we're now trying to analyze a simpler task, we do sometimes have tools that can solve the problem.
One such tool is query complexity. It is a very simple model, yet it has yielded a lot of insight into the nature of computation. In this talk I will show how we can think about certain approaches like linear search, binary search, period finding, and gradient descent through the lens of query complexity. We will then analyse the specific case of gradient descent. This work on gradient descent was joint work with Ankit Garg, Robin Kothari and Praneeth Netrapalli.
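As a toy illustration of the model (my own example, not from the talk): query complexity counts only the number of input positions an algorithm reads. The sketch below wraps the input in a counting oracle and compares linear and binary search.

```python
# Illustrative sketch: measuring query complexity with a counting oracle.

def make_oracle(xs):
    count = {"queries": 0}
    def query(i):
        count["queries"] += 1   # every input access is one "query"
        return xs[i]
    return query, count

def linear_search(query, n, target):
    for i in range(n):
        if query(i) == target:
            return i
    return -1

def binary_search(query, n, target):   # assumes the input is sorted
    lo, hi = 0, n - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        v = query(mid)
        if v == target:
            return mid
        lo, hi = (mid + 1, hi) if v < target else (lo, mid - 1)
    return -1

xs = list(range(1024))
q, c = make_oracle(xs); linear_search(q, len(xs), 1023)
print("linear queries:", c["queries"])   # 1024, i.e. O(n)
q, c = make_oracle(xs); binary_search(q, len(xs), 1023)
print("binary queries:", c["queries"])   # about log2(n) queries
```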
Title of the talk: Application-driven Graphical Models
Abstract: The talk will introduce a set of interesting computer vision applications and explain how graphical models are used to capture the underlying challenges involved in those applications and to achieve acceptable solutions.
Title of the talk: All Paths Lead to Rome
Abstract: The Japanese puzzle game of Roma (see https://www.janko.at/Raetsel/Roma/index.htm for a playable version of this game) is played on a square grid consisting of square cells. The cells are grouped into boxes of at most four neighboring cells, and are filled (or are to be filled) with arrows pointing in cardinal directions. The goal of the game is to fill up the empty cells with arrows such that each box contains at most one arrow of each direction and, regardless of where we start from, if we follow the arrows, we will always end up in the special Roma-cell. In this talk, we will explore the computational complexity of Roma.
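For concreteness, here is a hedged sketch of what counts as a valid filled grid; the representation (direction characters, a list of boxes, 'R' for the Roma-cell) is an illustrative assumption rather than anything prescribed by the talk.

```python
# Hypothetical Roma solution checker. grid[r][c] is 'N','S','E','W', or
# 'R' for the single Roma-cell; boxes is a list of lists of (r, c) cells.

STEP = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def is_valid_roma(grid, boxes):
    n = len(grid)
    for box in boxes:                       # box constraint
        arrows = [grid[r][c] for r, c in box if grid[r][c] != "R"]
        if any(arrows.count(d) > 1 for d in STEP):
            return False                    # repeated direction in a box
    for r in range(n):                      # reachability constraint
        for c in range(n):
            seen, cur = set(), (r, c)
            while grid[cur[0]][cur[1]] != "R":
                if cur in seen:
                    return False            # trapped in a cycle
                seen.add(cur)
                dr, dc = STEP[grid[cur[0]][cur[1]]]
                cur = (cur[0] + dr, cur[1] + dc)
                if not (0 <= cur[0] < n and 0 <= cur[1] < n):
                    return False            # walked off the grid
    return True
```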
Title of the talk: Time-Space Tradeoffs for Collisions in Hash Functions
Abstract: Cryptographic hash functions are functions that take arbitrary-length inputs and output a fixed-length digest. They are among the most important cryptographic primitives and are widely used in applications today. Apart from the compression requirement, applications using these functions may need additional properties to be provably secure. One such property, perhaps the most important, is collision resistance.
This property has been well studied for uniform adversaries. However, uniform adversaries fail to capture many real-world adversaries. Hence, several recent works have studied the collision resistance property for non-uniform adversaries. Analyzing non-uniform adversaries presents several challenges, which is why Dodis et al., in their EUROCRYPT '18 paper, presented a reduction to another (easier to analyze) model called the bit-fixing model.
In our CRYPTO '20 paper, we showed that adversaries in this bit-fixing model are too strong when the length of the collisions is bounded. We also showed a reduction to the multi-instance model, which helped us obtain better results for restricted parameter ranges. In our recent CRYPTO '22 paper, we further explored the relation between the bit-fixing model and the multi-instance model and further improved the results with our new findings.
The talk will include some preliminary definitions, a detailed description of these models, the results, and a high-level overview of the techniques from all the relevant works.
Title of the talk: Specialized Systems for Neural Network Workloads.
Abstract: The gap between the processing speed of the CPU and the access speed of the memory is becoming a bottleneck for many emerging applications. This gap can be reduced if the computation can be taken closer to the memory through near-memory processing (NMP). Among the logic units, application-specific integrated circuits (ASICs) are highly efficient in terms of power and area overhead for NMP integration. We aim to accelerate neural network workloads by integrating custom hardware near the memory.
As CNNs are widely used in many applications, the designed hardware can be used extensively in all such cases. To design an NMP-based system with high performance and energy efficiency, we explore various optimization techniques, such as leveraging parallelism, exploiting data sparsity, and utilizing computation redundancy to reduce the number of computations. All these techniques result in hardware designs that implement the appropriate dataflow and data-parallel algorithm, and they have positively impacted the system's performance and energy efficiency. To examine the deployability of the NMP approach, we also perform experiments on various memory technologies, such as 3D memory, hybrid memory, and commodity DRAM.
The designed systems perform substantially well when compared with multiple baselines and state-of-the-art works.
Title of the talk: A Perceptual Prediction Framework for Self-Supervised Event Detection and Segmentation in Streaming Videos
Abstract: Events are central to the content of human experience. From the constant stream of the sensory onslaught, the brain segments and extracts aspects related to activities, represents them, and stores them in memory for future comparison, retrieval, and re-storage. This talk will focus on the first problem: event segmentation from video streams. Can we temporally segment activity into its constituent sub-events? Can we spatially localize the event in the image frame? These tasks have been tackled through supervised learning, often requiring large amounts of training data with many manual annotations. The question we ask is: can we do these tasks without the need for manual labels? Human perception experiments suggest that we can solve these tasks without requiring high-level supervision. I will share our experience with a set of minimal, self-supervised, predictive learning models that draw inspiration from cognitive psychology and recent brain models from neuroscience. This approach can be used for temporal segmentation and spatial localization of events in the image. We will see results on traditional activity datasets such as the Breakfast Actions, 50 Salads, and INRIA Instructional Videos datasets, and on ten days of continuous video footage of a bird's nest. The proposed approach can outperform weakly supervised and other unsupervised learning approaches by up to 24% and has competitive performance compared to fully supervised methods.
Title of the talk: Learning with Reject Option
Abstract: The primary focus of classification problems has been on algorithms that return a prediction on every example. However, in many real-life situations, it may be prudent to reject an example rather than run the risk of a costly potential misclassification. Consider, for instance, a physician who has to return a diagnosis for a patient based on the observed symptoms and a preliminary examination. If the symptoms are either ambiguous or rare enough to be unexplainable without further investigation, the physician might choose not to risk misdiagnosing the patient. They might instead ask for further medical tests to be performed or refer the case to an appropriate specialist. The principal response in these cases is to “reject” the example. From a geometric standpoint, we can view the classifier as possessing a decision surface as well as a rejection surface. The rejection region affects the proportion of examples that are likely to be rejected, as well as the proportion of predicted examples that are likely to be correctly classified. A well-optimized classifier with a reject option is one that minimizes both the rejection rate and the misclassification rate on the predicted examples. We will discuss some of our recent contributions in this direction.
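One standard way to realize a reject option is Chow's rule: abstain whenever the classifier's confidence falls below a threshold. The sketch below is purely illustrative; the dataset, model, and threshold are assumptions, not the contributions discussed in the talk.

```python
# Minimal sketch of a classifier with a reject option (Chow's rule).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(Xtr, ytr)
conf = clf.predict_proba(Xte).max(axis=1)   # confidence in the top class
tau = 0.90                                  # reject below this confidence

accept = conf >= tau
preds = clf.predict(Xte)
print("rejection rate:", 1 - accept.mean())
print("accuracy on accepted examples:",
      (preds[accept] == yte[accept]).mean())
```

Raising tau rejects more examples but typically raises accuracy on the examples that are actually predicted, which is exactly the trade-off between the rejection rate and the misclassification rate described above.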
Title of the talk: How to store a graph?
Abstract: How does one store a graph in the database of a computer? Typically, the vertices are labelled by the set {1, 2, 3, ..., n}. The edges can then be encoded in several different ways: an adjacency matrix, an incidence matrix, or an adjacency list. But what if the vertices are labelled in a somewhat more creative way, so that the labels of the vertices themselves encode their adjacencies? This entirely eliminates the need to store the edges! This topic is part of a heavily researched field called graph labelling, with connections to coding theory and information theory. In this talk we will explore a type of graph labelling known as sum labelling. This is based on joint work with Henning Fernau (University of Trier, Germany). Our paper can be accessed at: https://eccc.weizmann.ac.il/report/2021/114
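As a hedged illustration of the idea (my own example, not from the paper): in a sum labelling, vertices carry distinct positive integer labels, and uv is an edge exactly when label(u) + label(v) is itself the label of some vertex, so the edge set can be recovered from the labels alone.

```python
# Sketch: recovering the edge set of a sum graph from its labels.

def edges_from_labels(labels):
    """uv is an edge iff label(u) + label(v) is itself a label."""
    label_set = set(labels)
    return {(u, v) for i, u in enumerate(labels)
                   for v in labels[i + 1:]
                   if u + v in label_set}

# Labels 1, 2, 3 encode the single edge (1, 2), since 1 + 2 = 3 is a
# label while 1 + 3 = 4 and 2 + 3 = 5 are not. The vertex labelled 3
# stays isolated; sum graphs always need such extra isolated vertices.
print(edges_from_labels([1, 2, 3]))   # {(1, 2)}
```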
Title of the talk: Perspectives on AI Ethics
Abstract: In recent years, we have witnessed widespread adoption of AI tools across various sectors, making them capable of impacting lives. Today AI-powered tools aid critical decision-making in policy, law and order, recruiting, healthcare, and education. However, they also bring the massive risk of making wrong decisions, which can turn one’s life upside down, as we have seen with COMPAS in the US judicial system and the A-level grading fiasco in the UK’s education system. Implementation of AI and ML tools without much deliberation on their ethical and social impact has resulted in unfair outcomes, often stemming from algorithmic and data biases. In this talk, I will introduce the concept of AI Ethics, why it is essential, and the gaps in the present-day AI Ethics discourse. The talk will also examine the notion of AI Ethics from a social and cultural perspective.
Title of the talk: Compression of Deep Learning Models for NLP
Abstract: RNNs and LSTMs have been used for quite some time for various NLP tasks. But these models are large, especially because of the input and output embedding parameters. In the past three to four years, the field of NLP has made significant progress thanks to Transformer-based models like BERT, GPT, T5, etc. But these models are humongous in size. Real-world applications, however, demand small model sizes, low response times, and low computational power consumption. In this talk, I will discuss four different types of methods for compressing such models for text, in order to enable their deployment in real industry NLP applications and projects: pruning, quantization, knowledge distillation, and other Transformer-specific methods.
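As a small taste of the first family of methods, here is a hedged sketch of magnitude pruning in PyTorch; the toy model and the sparsity level are illustrative assumptions, not the specific techniques covered in the talk.

```python
# Illustrative magnitude pruning: zero out the smallest-magnitude
# weights in every linear layer.
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.5):
    for module in model.modules():
        if isinstance(module, nn.Linear):
            w = module.weight.data
            k = int(w.numel() * sparsity)
            if k == 0:
                continue
            threshold = w.abs().flatten().kthvalue(k).values
            mask = w.abs() > threshold
            module.weight.data = w * mask   # smallest weights become zero

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
magnitude_prune(model, sparsity=0.5)
zeros = sum((m.weight == 0).sum().item()
            for m in model.modules() if isinstance(m, nn.Linear))
print("zeroed weights:", zeros)
```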
Title of the talk: Deep Learning case-studies: Recommender systems for online shopping and Task-oriented Natural Language Generation system
Abstract: In this talk, I present some of my past work that uses deep learning in two completely different contexts: recommender systems for online shopping and task-oriented chatbots. Deep learning has seen a wide range of applications of late in both academia and industry. Through these two case studies, we showcase the versatile nature of deep learning in solving complex business problems in two very different domains that are currently trending in the industry: online recommendations and task-oriented chatbots. We also discuss some practical performance metrics and trade-offs that need to be considered for scaling solutions in both case studies.
Title of the talk: Computational Mechanism Design for Social Decisions
Abstract: Artificial Intelligence deals with machines that take smart decisions. For a large spectrum of decision problems, particularly when the decision involves multiple self-interested agents, such intelligence is beyond the scope of 'learning from data'. In this talk, I am going to address 'strategic multi-agent systems' from a game-theoretic viewpoint -- a tool used in mathematics and microeconomics to analyze the behavior of rational and intelligent agents -- and show how robust AI systems can be built with such agents. In the process, I will discuss two problems of social importance, namely social distancing and peer grading, where computational mechanism design approaches are useful. The tools developed based on these ideas will also be briefly presented.
Title of the talk: Parameterized Algorithms for Model Counting
Abstract: The propositional model counting problem (#SAT) is a generalization of SAT in which the aim is to compute the number of satisfying assignments of a formula φ. Model counting is #P-complete even for 2CNF, although checking the satisfiability of a 2CNF formula can be done in polynomial time. We consider #SAT parameterized by the treewidth (tw) of the primal graph of the input CNF formula. The best-known algorithm runs in time $2^{tw} n^{O(1)}$, where n is the number of variables. One of the main challenges is whether we can have a faster algorithm, even if we allow a (multiplicative) approximation. In this talk we will see a lower bound on the running time of such algorithms assuming the Strong Exponential Time Hypothesis (SETH). For monotone formulas, given a tree decomposition of width w for the primal graph of the input formula, for any $\epsilon > 0$, we will see a $2^{(1-\epsilon)w} n^{O(1)}$-time $2^{\epsilon n}$-approximation algorithm.
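To fix the problem definition, here is a hedged brute-force model counter (the DIMACS-style clause encoding is my own illustrative choice); the algorithms in the talk instead exploit tree decompositions to avoid this exponential enumeration.

```python
# Tiny brute-force #SAT counter. A clause is a list of signed integers,
# DIMACS-style: literal 3 means x3, literal -3 means NOT x3.
from itertools import product

def count_models(clauses, n_vars):
    count = 0
    for bits in product([False, True], repeat=n_vars):
        # a clause is satisfied if some literal matches the assignment
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            count += 1
    return count

# (x1 OR x2) AND (NOT x1 OR x2) has exactly the 2 models with x2 = True.
print(count_models([[1, 2], [-1, 2]], 2))   # 2
```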
Title of the talk: Merely Fun with Algorithms
Abstract: In this talk, we will design and analyze "cool" (beautiful and efficient) algorithms for simple problems of a recreational nature. Emphasis will be placed on fundamental concepts in an interactive manner. After all, the goal is to have fun!
Title of the talk: Intelligent Occupant Sensing in Car Interiors
Abstract: Traditional occupant sensing methods employ physical sensors and buttons to detect and react to explicit driver/passenger requests in a passenger vehicle. We explore a computer-vision-based approach to understanding implicit requests from the occupants. Our Interior Assist system uses images captured from an in-car camera sensor, processed by a tiny deep neural network, to react to dynamic scenarios within the passenger vehicle. In this talk we will focus specifically on how gesture recognition can be used for occupant sensing in car interiors to enhance user experience and comfort, the challenges faced, and some future research directions.
Title of the talk: Matching under preferences: Stability to popularity
Abstract: Matching under preferences is a research area that finds numerous applications in practice, as diverse as assigning students to colleges, workers to firms, kidney donors to recipients, and users to servers in a distributed internet service, to name just a few. Indeed, this topic is at the heart of the intersection between Computer Science and Economics, and has been recognized as such by a Nobel Memorial Prize in Economics. In the first part of the talk, I will discuss canonical problems in this area, namely stable matching and its generalization, popular matching. In the second part of the talk, I will present a result that resolved arguably the main open problem in the subarea of popular matching: in an arbitrary graph, deciding if a popular matching exists is NP-complete.
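For background on the canonical problem, here is a sketch of the classical deferred-acceptance (Gale-Shapley) algorithm for stable matching; the data and names are illustrative, and popular matching, the talk's focus, requires different techniques.

```python
# Deferred acceptance (Gale-Shapley) for bipartite stable matching.

def gale_shapley(men_prefs, women_prefs):
    """Each argument maps a person to their ordered preference list."""
    rank = {w: {m: i for i, m in enumerate(p)}
            for w, p in women_prefs.items()}
    free = list(men_prefs)
    next_idx = {m: 0 for m in men_prefs}   # next woman m will propose to
    engaged = {}                           # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_idx[m]]
        next_idx[m] += 1
        if w not in engaged:
            engaged[w] = m                 # w accepts her first proposal
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])        # w trades up; old partner freed
            engaged[w] = m
        else:
            free.append(m)                 # w rejects m; he proposes again
    return engaged

prefs_m = {"a": ["x", "y"], "b": ["x", "y"]}
prefs_w = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(prefs_m, prefs_w))   # {'x': 'b', 'y': 'a'}
```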
Title of the talk: Towards Precision Oncology using Machine Learning on Medical Images
Abstract: The current approach to cancer treatment is moving away from "carpet bombing" towards a "surgical strike." This means that each patient's cancer is profiled for specific molecular or genomic characteristics in order to prescribe targeted therapies. The cost of these tests remains unaffordable for a vast majority of the population in low-income countries, such as India. On the other hand, general pathology and radiology have become ubiquitous even in tier-3 cities. By using the latest advances in machine learning and artificial intelligence, our research group develops and validates tests that can utilize inexpensive medical imaging modalities to analyze subtle visual patterns of various subtypes of cancer to aid precision medicine. In this talk, we will give a few examples of successful results and the process that was required to get there.
Title of the talk: Data-efficient Machine Learning for the Diagnosis of Chest Radiographs
Abstract: Modern deep learning methods are data-hungry. While such methods may require millions of annotated training data, humans can learn new concepts with only a few annotated examples. To mimic this incredible cognitive ability of human beings, different methods for data-efficient machine learning have been proposed in recent years. Few-shot learning and Zero-shot learning are two of the most promising avenues among these methods. In the last few years, several few-shot and zero-shot learning methods have been designed for problems related to natural images. However, such methods are relatively rare in the field of radiology diagnosis. In this webinar, we will present several approaches for few-shot and zero-shot diagnosis of chest radiographs. Our methods show promising results on publicly available chest x-ray datasets.
Title of the talk: Transparent, Trustworthy and Privacy-Preserving Supply Chains
Abstract: Over the years, supply chains have evolved from a few regional traders to globally complex chains of trade. Consequently, supply chain management systems have become heavily dependent on digitisation for the purposes of data storage and traceability of goods. However, current approaches suffer from issues such as the scattering of information across multiple silos, susceptibility to erroneous or untrustworthy data, inability to accurately capture physical events associated with the movement of goods, and inadequate protection of trade secrets. Our work aims to address the above-mentioned challenges related to traceability, scalability, trustworthiness, and privacy. To support traceability and provenance, we propose a consortium-blockchain-based framework, ProductChain, which provides an immutable audit trail of a product's supply chain events and its origin. The framework also presents a sharded network model to meet the scalability needs of complex supply chains. Next, we address the issue of trust associated with the quality of commodities and the entities logging data on the blockchain through an extensible framework, TrustChain. TrustChain tracks interactions among supply chain entities and dynamically assigns trust and reputation scores to commodities and traders using smart contracts. For protecting trade secrets, we propose a privacy-preservation framework, PrivChain, which allows traders to keep trade-related information private and instead return computations or proofs on the data to support provenance and traceability claims. The traders are in turn incentivised for providing such proofs. A different privacy-preservation approach, which decouples the identities of traders, is explored in TradeChain by managing two ledgers: one for managing decentralised identities and another for recording supply chain events. The information from the two ledgers is then collated using access tokens provided by the data owners, i.e., the traders themselves.
Title of the talk: Powering the Next Era of Analytics and AI with GPU
Abstract: Artificial Intelligence has played a key role in everything from predicting, minimizing, and stalling pandemic outbreaks such as the coronavirus to making autonomous vehicles a reality. The world of computing is going through an incredible change. With deep learning and AI, computers are learning to write their own software. Learn how deep learning relies on the GPU ecosystem, helping the research and data science community solve use cases in domains like healthcare, autonomous driving, finance, and many more.
Title of the talk: What Is Software Engineering Anyway? Reflections on 50 Years of Software Engineering and the Road Ahead!
Abstract: This talk discusses the nature of software engineering from multiple perspectives and its historical contributions, and presents a future perspective based on current research trends. What can Software Engineering do for the design of AI/ML systems? Are they energy efficient?
Title of the talk: Explainable AI (XAI) using Nonlinear Decision Trees
Abstract: For many years, practitioners have been interested in solving optimization and AI-related problems to find a single acceptable solution. With the advent of efficient methodologies and the human quest for knowledge, optimization and AI systems are now embedded with existing knowledge, or modified to extract essential knowledge with which they solve problems. Efforts to explain AI systems with interpretable rules are also receiving attention. In this talk, we present knowledge-driven optimization and explainable AI applications that use a novel nonlinear decision tree approach to solve a variety of practical problems.
Title of the talk: Decision-making in the face of uncertainty
Abstract: The future is about large, complex systems of systems that need to operate in an increasingly dynamic environment where the changes cannot be deduced a priori. Typically, a complex system of systems is understood in terms of its various parts and the interactions between them. Moreover, this understanding is typically partial and uncertain, and from it the overall system behavior emerges over time. With the overall system behavior hard to know a priori, and with conventional techniques for system-wide analysis either lacking in rigor or defeated by the scale of the problem, current practice often relies exclusively on human expertise for the analysis and synthesis that lead to decision-making. This is a time-, effort-, cost- and intellect-intensive endeavor. The talk will present an approach aimed at overcoming these limitations and illustrate its efficacy on a few representative real-world problems.
Title of the talk: Brain Variable Reward Structure for Cooperative Machine Learning in IoT Network
Abstract: Recent advances in machine learning research have resulted in state-of-the-art techniques where Reinforcement Learning (RL) agents use either value-based or policy-based methods with the goal of reducing variance in the reward signal, thereby trying to reach an optimal state in the shortest period. Metrics such as the number of iterations taken to reach the optimal reward structure, or the number of interactions needed with the environment to achieve this, are generally used as key performance indicators. A large body of research shows how agents can achieve this using either large amounts of training data or complex algorithms that require power- and resource-intensive computational elements. But such a strategy may not be applicable to a resource- and power-sensitive network of IoT devices and, more importantly, it differs fundamentally from how humans learn. To overcome this challenge, we have been looking at the field of neuroscience to draw inspiration from how the human brain works, specifically the release of dopamine in response to a variable reward structure. Typical RL systems focus on receiving observations from the environment, calculating the reward, and then deciding on the next set of actions at fixed intervals or based on fixed responses. However, scientific research on human brain activity has shown higher dopamine release in response to rewards received at variable times. In this presentation, findings from two particularly interesting areas in neuroscience and psychology are presented. The first is related to dopamine-based reward-stimulated learning, which supports the concept of cooperative learning: it has been shown that active dopamine release increases the processing of new information. The second is related to studies finding that cooperative groups generate more participation and stimulate multiple brain regions; in such an environment, the efficiency of the network increases dramatically. We will discuss how RL techniques can learn from this behavior, especially in an IoT system that may contain several nodes. Rather than running agents on all the nodes at fixed time intervals, our research investigates the efficiency gained by invoking agents at different time instances, thereby providing them with an opportunity to receive reward signals. Just as a variable reward structure results in increased dopamine activity in human brains, such an approach can help achieve higher efficiency in IoT systems.
Title of the talk: From Smart-Sensing to Smart Living
Abstract: We live in an era in which our physical and personal environments are becoming increasingly intertwined and smarter due to the advent of pervasive sensing, wireless communications, computing, control, and actuation technologies. Indeed, our daily lives in smart cities and connected communities depend on a wide variety of cyber-physical infrastructures, such as smart city, smart energy, smart transportation, smart healthcare, and smart manufacturing. Alongside, the availability of wireless sensors, the Internet of Things (IoT), and rich mobile devices is empowering humans with fine-grained information and opinion collection, through crowdsensing, about events of interest, resulting in actionable inferences and decisions. This synergy has led to cyber-physical-social (CPS) convergence with humans in the loop, exhibiting complex interactions, interdependence, and adaptation between engineered/natural systems and human users, with the goal of improving human quality of life and experience in smart living environments. However, huge challenges are posed by the scale, heterogeneity, big data, social dynamics, and resource limitations in sensor, IoT, and CPS networks. This talk will highlight unique research challenges in smart living, followed by novel frameworks and models for efficient mobility management, data gathering and fusion, security and trustworthiness, and the trade-off between energy and information quality in multi-modal context recognition. Case studies and experimental results from smart energy and smart healthcare applications will be presented. The talk will conclude with directions for future research.
Title of the talk: What would make an intelligent system generally intelligent?
Abstract: Intelligence may be understood as the ability of a system to construct models, usually in the service of solving problems or controlling behavior. Some problems are general enough to require a unified model of the environment of the system, including the intelligent system itself, and the nature and results of its interactions. While many complex organisms implement such models, comparatively little AI research has been dedicated to them. What types of models and algorithms can support such a degree of generality? Can we identify design principles of biological and social systems that we can transfer into AI systems?
Title of the talk: Participatory Budgeting - Making Budgeting Great Again
Abstract: Participatory budgeting is a grassroots, direct-democracy approach to deciding on the usage of public funds, most usually in the context of the yearly budget of a municipality. I will discuss this concept, provide some appropriate mathematical models for it, and concentrate on related algorithmic considerations, most notably novel aggregation methods and generalizations such as the incorporation of liquid democracy, project interactions, and the possibility of deciding on hierarchical budgets.
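As one very simple baseline aggregation rule (an illustrative assumption, not necessarily one of the novel methods discussed in the talk), greedy approval voting funds projects in order of approval count while the budget allows:

```python
# Greedy approval-based participatory budgeting (toy baseline).

def greedy_approval(projects, approvals, budget):
    """projects: dict name -> cost; approvals: dict name -> vote count."""
    funded, spent = [], 0
    for name in sorted(projects, key=lambda p: -approvals.get(p, 0)):
        if spent + projects[name] <= budget:   # fund it if it still fits
            funded.append(name)
            spent += projects[name]
    return funded

projects  = {"park": 600, "library": 500, "bike lane": 300}
approvals = {"park": 90, "library": 70, "bike lane": 60}
print(greedy_approval(projects, approvals, budget=1000))
# ['park', 'bike lane'] -- the library is popular but no longer fits
```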
Title of the talk: Trustworthy AI Systems.
Abstract: We are experiencing unprecedented growth in AI in recent years. Coupled with it, we have also seen simple adversarial attacks and bias in AI systems. In this talk, we will look at how to build trustworthy AI systems to prevent and thwart attacks on AI systems and protect AI models. Specifically, we will discuss threat models extended to AI systems for adversarial attacks and their mitigation, bias compensation, and taking advantage of advances in blockchain technology and fully homomorphic encryption (FHE) in building AI systems to make them trustworthy.
Title of the talk: Multi-view invariance and grouping for self-supervised learning
Abstract: In this talk I will present our recent efforts in learning representations that can benefit semantic downstream tasks.
Our methods build on two simple yet powerful insights: 1) the representation must be stable under different data augmentations or "views" of the data; 2) the representation must group together instances that co-occur in different views or modalities. I will show that these two insights can be applied to weakly supervised and self-supervised learning, and to image, video, and audio data, to learn highly performant representations. For example, these representations outperform weakly supervised representations trained on billions of images or millions of videos; they can outperform ImageNet-supervised pretraining on a variety of downstream tasks; and they have led to state-of-the-art results on multiple benchmarks. These methods build upon prior work in clustering and contrastive methods for representation learning. I will conclude the talk by presenting shortcomings of our work and some preliminary thoughts on how they may be addressed.
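As a hedged illustration of the first insight, here is a minimal InfoNCE-style contrastive loss in PyTorch, where embeddings of two augmented views of the same instance are pulled together and all other instances are pushed apart; this is my own sketch, not the exact objective from the talk.

```python
# Minimal InfoNCE-style contrastive loss over two views of a batch.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (N, d) embeddings of two views of the same N instances."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (N, N) cosine similarities
    targets = torch.arange(z1.size(0))   # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce(z1, z2))
```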
Title of the talk: Picking Random Vertices
Abstract: We survey some recent graph algorithms that are based on picking a vertex at random and declaring it to be a part of the solution. This simple idea has been deployed to obtain state-of-the-art parameterized, exact exponential time, and approximation algorithms for a number of problems, such as Feedback Vertex Set and 3-Hitting Set. We will also discuss a recent 2-approximation algorithm for Feedback Vertex Set in Tournaments that is based on picking a vertex at random and declaring it to /not/ be part of the solution.
Title of the talk: Why do we need to Optimize Deep Learning Models?
Abstract: Designing deep learning-based solutions is becoming a race to train deeper models with ever more layers. While a large, deep model can provide competitive accuracy, it creates a lot of logistical challenges and unreasonable resource requirements during development and deployment. This has been one of the key reasons why deep learning models are not extensively used in various production environments, especially on edge devices. There is an immediate need to optimize and compress these deep learning models to enable on-device intelligence. In this talk, I will discuss the different deep learning model optimization techniques and the challenges involved in production-ready model optimization.