The advent of quantum computers places many widely used cryptographic protocols at risk. In response to this threat, the field of post-quantum cryptography has emerged. The most widely recognized post-quantum protocols are based on lattices. Beyond their resistance to quantum attacks, lattices are instrumental tools in cryptography thanks to their rich mathematical structure. In this talk, I will present my work on understanding the complexity of lattice problems and on constructing lattice-based protocols useful in practical scenarios.
Machine learning models are composed of simple primitives such as matrix multiplication and non-linear transformations. Studying and improving these primitives is critical to advancing the capabilities of ML models: for example, the advent of powerful primitives such as convolutions and self-attention led to breakthroughs in deep learning. However, existing primitives still have significant drawbacks, including computational inefficiency and difficulty modeling long contexts.
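To make the notion of a "primitive" concrete, here is a minimal NumPy sketch of single-head self-attention, one of the primitives mentioned above; the function names, dimensions, and random weights are illustrative, not taken from any particular model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: every position attends to every other."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # scaled pairwise similarities
    return softmax(scores) @ V               # attention-weighted mix of values

rng = np.random.default_rng(0)
n, d = 4, 8                                  # sequence length, model dimension
X = rng.normal(size=(n, d))                  # toy input sequence
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Note the quadratic cost in sequence length (`scores` is an n-by-n matrix), which is one source of the long-context difficulty the abstract alludes to.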
Modern computers run on top of complex processors, but complexity is the worst enemy of security. For decades, scientists and engineers have worked to build secure software systems. However, my work shows that new classes of vulnerabilities in complex processors can break the security guarantees provided by software systems, cryptographic protocols, and privacy technologies. In this talk, I will give an overview of my work on discovering, evaluating, and mitigating such vulnerabilities. First, I will talk about side-channel attacks on cryptographic implementations.
Differential privacy has become a de facto standard for extracting information from a dataset while protecting the confidentiality of individuals whose data are collected. It has been increasingly adopted in industry and the public sector. Crucial to any differentially private system is a set of privacy mechanisms, the building blocks of larger privacy-preserving algorithms. Those privacy mechanisms inject randomness into non-private computations in order to ensure privacy protections.
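As an illustration of such a privacy mechanism, here is a minimal sketch of the classic Laplace mechanism, which injects noise calibrated to a query's sensitivity; the dataset and parameter values below are made up for the example.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value plus Laplace noise with scale sensitivity/epsilon.

    This satisfies epsilon-differential privacy for a query whose output
    changes by at most `sensitivity` when one individual's data changes.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: a counting query has sensitivity 1, since adding or removing
# one person changes the count by at most 1.
ages = [23, 45, 31, 62, 38]                     # toy dataset
count_over_30 = sum(a > 30 for a in ages)       # non-private answer: 4
private_count = laplace_mechanism(count_over_30, sensitivity=1, epsilon=0.5)
```

Smaller `epsilon` means stronger privacy and noisier answers; larger privacy-preserving algorithms are built by composing calls like this one.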
In this talk, I'd like to upend the notion that a computing system needs reliable power to support useful computation, sensing, and interaction. For decades, computing systems have assumed stable, reliable power from a battery or wall outlet. All our smart devices (i.e., wireless sensing and computing systems), from FitBits to Game Boys, have been powered by batteries. This is a problem: batteries are bulky, expensive, high-maintenance, and not sustainable for the next trillion devices.
With the recent proliferation of countries and companies with access to low Earth orbit (and beyond), a host of geopolitical and geoeconomic issues have arisen that require diplomacy informed by sound science and technology analysis. Space diplomacy needs to consider the involvement of multiple nation-state and private-sector actors, burgeoning space infrastructure and satellite internet service providers, launch and space-debris scenarios, and deep-space regulatory and extractive-industry issues.
There is no doubt that robots will play a crucial role in the future and will increasingly need to work as teams in ever more complex applications. Advances in robotics have laid the hardware foundations for building large-scale multi-robot systems composed of mobile robots and drones.
Computational game theory studies optimal decision making in multi-agent interactions ("games") under imperfect information and strategic behavior. While much prior work has focused on decision-making problems in which all agents act once and simultaneously, I will focus on the more realistic case in which each agent faces a tree-form decision problem with potentially multiple acting points and partial observations about the environment ("extensive-form games").
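To make "tree-form decision problem" concrete, here is a minimal backward-induction sketch for a single agent acting at multiple points; it assumes perfect information for simplicity, whereas the extensive-form games in the talk also involve partial observation (information sets). The tree and payoffs are illustrative.

```python
def best_value(node):
    """Return (value, plan) for the acting agent via backward induction.

    A node is either a numeric leaf payoff, or a dict mapping
    action names to subtrees.
    """
    if isinstance(node, (int, float)):       # leaf: payoff reached
        return node, []
    # Recurse into each subtree and keep the action with the best value.
    action, (value, plan) = max(
        ((a, best_value(sub)) for a, sub in node.items()),
        key=lambda kv: kv[1][0],
    )
    return value, [action] + plan

# Two acting points: choose left/right, then up/down (made-up payoffs).
tree = {
    "left":  {"up": 3, "down": 1},
    "right": {"up": 2, "down": 5},
}
value, plan = best_value(tree)
print(value, plan)  # 5 ['right', 'down']
```

In a full extensive-form game, other agents and chance also act at nodes, and an agent cannot distinguish nodes within the same information set, which is what makes solving these games substantially harder than this sketch.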
In this talk I will discuss recent progress towards using human input to enable safe and robust autonomous systems. Much work on robust machine learning and control seeks to be resilient to, or to remove entirely the need for, human input. By contrast, my research seeks to directly and efficiently incorporate human input into the design of robust AI systems. One challenge that arises when robots and other AI systems learn from human input is the often substantial uncertainty over the human's true intent and the corresponding desired robot behavior.
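One common way to represent that uncertainty is a Bayesian posterior over candidate intents, updated from observed human choices under a Boltzmann-rational choice model; the intents, rewards, and observations below are purely illustrative, not from the talk.

```python
import math

# Candidate intents, each assigning a (made-up) reward to each option.
intents = {
    "fast":   {"highway": 2.0, "scenic": 0.0},
    "pretty": {"highway": 0.0, "scenic": 2.0},
}

def likelihood(choice, rewards, beta=1.0):
    """P(choice | intent) under a Boltzmann-rational human model."""
    z = sum(math.exp(beta * r) for r in rewards.values())
    return math.exp(beta * rewards[choice]) / z

def posterior(observations, beta=1.0):
    """Bayesian update over intents from a sequence of observed choices."""
    post = {i: 1.0 / len(intents) for i in intents}   # uniform prior
    for choice in observations:
        for i in intents:
            post[i] *= likelihood(choice, intents[i], beta)
        total = sum(post.values())
        post = {i: p / total for i, p in post.items()}
    return post

post = posterior(["scenic", "scenic"])
```

After two "scenic" choices, the posterior concentrates on the "pretty" intent; a robot can then plan against this distribution rather than committing to a single guess about what the human wants.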