Publications
Publications by category in reverse chronological order. Generated by jekyll-scholar.
2024
- Frequentist Regret Analysis of Thompson Sampling with Fractional Posteriors for Generalized Linear Bandit. Prateek Jaiswal, Debdeep Pati, Anirban Bhattacharya, and 1 more author. 2024.
We propose a variant of the Thompson Sampling (TS) algorithm for solving stochastic generalized linear bandit problems. We call the proposed algorithm α-TS, where we use a fractional or α-posterior instead of the standard posterior in TS. Our main contribution is to identify general regularity conditions on the prior and reward distributions to generalize the analysis of α-TS without assuming any tractable representation (or approximation) of the posterior distribution, unlike previous works. The exponential (generalized linear model) and sub-Gaussian families of distributions satisfy our general conditions on the reward distributions. Our general regret bound yields a regret bound of O(d^{7/4}√T) for the exponential family and of O(d^2√T) for the sub-Gaussian family of reward distributions, which are within factors of d^{3/4} and d, respectively, of the lower bound. Our proof technique combines the recent advancements in the analysis of linear bandit problems with the first- and second-order posterior concentration theory in the Bayesian statistics literature.
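To illustrate the fractional-posterior idea in a simple conjugate setting (a Bernoulli bandit with Beta priors — not the general, tractability-free setting analyzed in the paper), a minimal α-TS sketch might look like:

```python
import numpy as np

def alpha_ts_bernoulli(true_means, alpha=0.8, horizon=2000, seed=0):
    """Minimal alpha-TS sketch for a Bernoulli bandit.

    With a Beta(1, 1) prior, tempering the likelihood by alpha gives the
    fractional posterior Beta(1 + alpha * successes, 1 + alpha * failures)
    for each arm; alpha = 1 recovers standard Thompson Sampling.
    """
    rng = np.random.default_rng(seed)
    succ = np.zeros(len(true_means))
    fail = np.zeros(len(true_means))
    regret = 0.0
    best = max(true_means)
    for _ in range(horizon):
        # Draw one mean per arm from its fractional posterior, play the argmax.
        theta = rng.beta(1 + alpha * succ, 1 + alpha * fail)
        k = int(np.argmax(theta))
        reward = float(rng.random() < true_means[k])
        succ[k] += reward
        fail[k] += 1.0 - reward
        regret += best - true_means[k]
    return regret
```

With means such as [0.3, 0.5, 0.7], the cumulative regret should stay well below the worst case of horizon × (largest gap), consistent with sublinear regret growth.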
2023
- Generalized Regret Analysis of Thompson Sampling using Fractional Posteriors. Prateek Jaiswal, Debdeep Pati, Anirban Bhattacharya, and 1 more author. arXiv preprint arXiv:2309.06349, 2023.
Thompson sampling (TS) is one of the most popular and earliest algorithms for solving stochastic multi-armed bandit problems. We consider a variant of TS, called α-TS, where we use a fractional posterior instead of the standard posterior distribution. To compute a fractional posterior, the likelihood in the definition of the standard posterior is tempered with a factor α. For α-TS we obtain both instance-dependent and instance-independent frequentist regret bounds under very mild conditions on the prior and reward distributions. Both the sub-Gaussian and exponential family models satisfy our general conditions on the reward distribution. Our conditions on the prior distribution just require its density to be positive, continuous, and bounded. We also establish another instance-dependent regret upper bound that matches (up to constants) that of improved UCB [Auer and Ortner, 2010]. Our regret analysis carefully combines recent theoretical developments in the non-asymptotic concentration analysis and Bernstein-von Mises type results for the fractional posterior distribution. Moreover, our analysis does not require additional structural properties such as closed-form posteriors or conjugate priors.
- On the Statistical Consistency of Risk-Sensitive Bayesian Decision-Making. Prateek Jaiswal, Harsha Honnappa, and Vinayak Rao. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.
We study data-driven decision-making problems in the Bayesian framework, where the expectation in the Bayes risk is replaced by a risk-sensitive entropic risk measure. We focus on problems where calculating the posterior distribution is intractable, a typical situation in modern applications with large datasets and complex data-generating models. We leverage a dual representation of the entropic risk measure to introduce a novel risk-sensitive variational Bayesian (RSVB) framework for jointly computing a risk-sensitive posterior approximation and the corresponding decision rule. The proposed RSVB framework can be used to extract computational methods for doing risk-sensitive approximate Bayesian inference. We show that our general framework includes two well-known computational methods for doing approximate Bayesian inference, namely naive VB and loss-calibrated VB. We also study the impact of these computational approximations on the predictive performance of the inferred decision rules and values. We compute the convergence rates of the RSVB approximate posterior and also of the corresponding optimal value and decision rules. We illustrate our theoretical findings in both parametric and nonparametric settings with the help of three examples: the single- and multi-product newsvendor model and Gaussian process classification.
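For concreteness, the entropic risk measure underlying the RSVB framework has the form ρ_γ(L) = (1/γ) log E[exp(γL)]. A small standalone Monte Carlo sketch of this quantity (illustrative only, not the paper's RSVB algorithm) is:

```python
import numpy as np

def entropic_risk(losses, gamma):
    """Monte Carlo estimate of (1/gamma) * log E[exp(gamma * L)].

    Computed via log-sum-exp for numerical stability. gamma > 0 weights
    high-loss outcomes more heavily than the plain expectation, and the
    measure tends to E[L] as gamma -> 0.
    """
    g = gamma * np.asarray(losses, dtype=float)
    m = g.max()
    return (m + np.log(np.mean(np.exp(g - m)))) / gamma

# For L ~ N(mu, sigma^2) the entropic risk is mu + gamma * sigma^2 / 2 in
# closed form, so this estimate should be close to 1.0 + 2.0 * 0.25 / 2 = 1.25.
rng = np.random.default_rng(1)
est = entropic_risk(rng.normal(1.0, 0.5, size=100_000), gamma=2.0)
```

By Jensen's inequality the entropic risk always dominates the plain expected loss, which is why it is "risk-sensitive": tail losses are penalized.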
- Bayesian Joint Chance Constrained Optimization: Approximations and Statistical Consistency. Prateek Jaiswal, Harsha Honnappa, and Vinayak A. Rao. SIAM Journal on Optimization, 2023.
This paper considers data-driven chance-constrained stochastic optimization problems in a Bayesian framework. Bayesian posteriors afford a principled mechanism to incorporate data and prior knowledge into stochastic optimization problems. However, the computation of Bayesian posteriors is typically an intractable problem and has spawned a large literature on approximate Bayesian computation. Here, in the context of chance-constrained optimization, we focus on the question of statistical consistency (in an appropriate sense) of the optimal value, computed using an approximate posterior distribution. To this end, we rigorously prove a frequentist consistency result demonstrating the convergence of the optimal value to the optimal value of a fixed, parameterized constrained optimization problem. We augment this by also establishing a probabilistic rate of convergence of the optimal value. We also prove the convex feasibility of the approximate Bayesian stochastic optimization problem. Finally, we demonstrate the utility of our approach on an optimal staffing problem for an M/M/c queueing model.
2020
- Estimating Stochastic Poisson Intensities Using Deep Latent Models. Ruixin Wang, Prateek Jaiswal, and Harsha Honnappa. In 2020 Winter Simulation Conference (WSC), 2020.
We present a new method for estimating the stochastic intensity of a doubly stochastic Poisson process. Statistical and theoretical analyses of traffic traces show that these processes are appropriate models of high intensity traffic arriving at an array of service systems. The statistical estimation of the underlying latent stochastic intensity process driving the traffic model involves a rather complicated nonlinear filtering problem. We develop a novel simulation method, using deep neural networks to approximate the path measures induced by the stochastic intensity process, for solving this nonlinear filtering problem. Our simulation studies demonstrate that the method is quite accurate on both in-sample estimation and on an out-of-sample performance prediction task for an infinite server queue.
- Asymptotic Consistency of α-Rényi-Approximate Posteriors. Prateek Jaiswal, Vinayak Rao, and Harsha Honnappa. J. Mach. Learn. Res., Jan 2020.
We study the asymptotic consistency properties of α-Rényi approximate posteriors, a class of variational Bayesian methods that approximate an intractable Bayesian posterior with a member of a tractable family of distributions, the member chosen to minimize the α-Rényi divergence from the true posterior. Unique to our work is that we consider settings with α > 1, resulting in approximations that upper-bound the log-likelihood, and consequently have wider spread than traditional variational approaches that minimize the Kullback-Leibler (KL) divergence from the posterior. Our primary result identifies sufficient conditions under which consistency holds, centering around the existence of a 'good' sequence of distributions in the approximating family that possesses, among other properties, the right rate of convergence to a limit distribution. We further characterize the good sequence by demonstrating that a sequence of distributions that converges too quickly cannot be a good sequence. We also extend our analysis to the setting where α equals one, corresponding to the minimizer of the reverse KL divergence, and to models with local latent variables. We also illustrate the existence of good sequences with a number of examples. Our results complement a growing body of work focused on the frequentist properties of variational Bayesian methods.
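For univariate Gaussians the α-Rényi divergence is available in closed form, which makes the α > 1 regime easy to explore numerically. A sketch using the standard formula (illustrative, not code from the paper):

```python
import math

def renyi_gauss(mu0, var0, mu1, var1, alpha):
    """Renyi divergence D_alpha(N(mu0, var0) || N(mu1, var1)), alpha != 1.

    Finite only when var_star = alpha * var1 + (1 - alpha) * var0 > 0;
    for alpha > 1 this restricts how much smaller var1 may be than var0.
    """
    var_star = alpha * var1 + (1.0 - alpha) * var0
    if var_star <= 0:
        raise ValueError("divergence is infinite for these parameters")
    return (alpha * (mu0 - mu1) ** 2 / (2.0 * var_star)
            + ((1.0 - alpha) * math.log(var0) + alpha * math.log(var1)
               - math.log(var_star)) / (2.0 * (alpha - 1.0)))

# Equal variances: D_alpha reduces to alpha * (mu0 - mu1)^2 / (2 * var),
# e.g. D_2(N(0, 1) || N(1, 1)) = 1.0.
```

As α → 1 this expression recovers the familiar KL divergence between the two Gaussians, which is the boundary case the paper also treats.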
- Variational Bayesian Methods for Stochastically Constrained System Design Problems. Prateek Jaiswal, Harsha Honnappa, and Vinayak A. Rao. In Symposium on Advances in Approximate Bayesian Inference, Jan 2020.
We study system design problems stated as parameterized stochastic programs with a chance-constraint set. We adopt a Bayesian approach that requires the computation of a posterior predictive integral which is usually intractable. In addition, for the problem to be a well-defined convex program, we must retain the convexity of the feasible set. Consequently, we propose a variational Bayes-based method to approximately compute the posterior predictive integral that ensures tractability and retains the convexity of the feasible set. Under certain regularity conditions, we also show that the solution set obtained using variational Bayes converges to the true solution set as the number of observations tends to infinity. We also provide bounds on the probability that a point that is infeasible with respect to the true constraints is qualified as feasible under the VB approximation for a given number of samples.
- Statistical inference for approximate Bayesian optimal design. Prateek Jaiswal and Harsha Honnappa. In 2020 Winter Simulation Conference (WSC), Jan 2020.
This paper studies a generic Bayesian optimal design formulation with chance constraints, where the decision variable lies in a separable, reflexive Banach space. This setting covers a gamut of simulation and modeling problems that we illustrate through two example problem formulations. The posterior objective cannot be computed, in general, and it is necessary to use approximate Bayesian inference. Sampling-based approximate inference, however, introduces significant variance and, in general, leads to non-convex approximate feasible sets, even when the original problem is convex. In this paper, we use variational Bayesian approximations that introduce no variance and retain the convexity of the feasibility set, subject to easily satisfied regularity conditions on the approximate posterior, albeit at the expense of a much larger bias. Our main results, therefore, establish large sample asymptotic consistency of the optimal solutions and optimal value of this approximate Bayesian optimal design formulation.
- Asymptotic consistency of loss-calibrated variational Bayes. Prateek Jaiswal, Harsha Honnappa, and Vinayak A. Rao. Stat, Feb 2020.
This paper establishes the asymptotic consistency of the loss-calibrated variational Bayes (LCVB) method. LCVB is a method for computing approximate Bayesian posteriors in a "loss-aware" manner. This methodology is also highly relevant in general data-driven decision-making contexts. Here, we establish the asymptotic consistency of both the loss-calibrated approximate posterior and the resulting decision rules. We also establish the asymptotic consistency of decision rules obtained from a "naive" two-stage procedure that first computes a standard variational Bayes approximation and then uses this in the decision-making procedure.
2018
- Optimal Allocations for Sample Average Approximation. Prateek Jaiswal, Harsha Honnappa, and Raghu Pasupathy. In 2018 Winter Simulation Conference (WSC), Feb 2018.
We consider a single-stage stochastic program without recourse with a strictly convex loss function. We assume a compact decision space and grid it with a finite set of points. In addition, we assume that the decision maker can generate samples of the stochastic variable independently at each grid point and form a sample average approximation (SAA) of the stochastic program. Our objective in this paper is to characterize an asymptotically optimal linear sample allocation rule, given a fixed sampling budget, which maximizes the decay rate of the probability of making a false decision.
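As a baseline for the setup described above, plain SAA with an equal (rather than optimized) sample allocation across grid points can be sketched as follows; the quadratic loss and Gaussian input are illustrative choices, not taken from the paper:

```python
import numpy as np

def saa_on_grid(grid, loss, sampler, n_per_point, rng):
    """Equal-allocation SAA over a finite grid of decisions.

    For each grid point x, draw n_per_point i.i.d. samples of the random
    input and average loss(x, sample); return the empirical minimizer.
    (The paper instead optimizes how a fixed budget is split across points.)
    """
    best_x, best_val = None, float("inf")
    for x in grid:
        samples = sampler(n_per_point, rng)
        val = float(np.mean(loss(x, samples)))
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Quadratic loss with xi ~ N(2, 1): the true minimizer is E[xi] = 2.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 4.0, 41)
x_star, _ = saa_on_grid(grid,
                        lambda x, xi: (x - xi) ** 2,
                        lambda n, r: r.normal(2.0, 1.0, n),
                        5000, rng)
```

With enough samples per point, the empirical minimizer concentrates near the true solution; the allocation question is how to spend a fixed total budget across grid points so that the probability of selecting a wrong point decays as fast as possible.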