About me
This is a page not in the main menu
Published in SIURO, Volume 6, 2013
This work explores the use of contextual shape descriptors for video stabilization. We apply the method to a particularly pathological video: a sketch of a seashell that exhibits a high degree of self-similarity at various scales.
Recommended citation: Meng, Ling Han, Joseph Geumlek, Holly Chu, Justin Hoogenstyrd, and Ernie Esser (2013). "Contextual point matching for video stabilization." SIAM Undergraduate Research Online. Volume 6. http://evoq-eval.siam.org/Portals/0/Publications/SIURO/Vol6/CONTEXTUAL_POINT_MATCHING_FOR_VIDEO_STABILIZATION.pdf?ver=2018-04-06-151848-293
Published in Conference on Uncertainty in Artificial Intelligence, 2016
This paper explores how posterior sampling, a popular means for Bayesian data analysis, can be performed privately. In particular, it explores the theoretical benefits achieved by adding noise to the sufficient statistics of exponential family models, compared to other approaches that do not leverage these distributional traits. The theory examines the asymptotic relative efficiency of these samples, a useful characterization of a mechanism's behavior as data sizes grow.
Recommended citation: Foulds, James, Joseph Geumlek, Max Welling, and Kamalika Chaudhuri (2016). "On the theory and practice of privacy-preserving Bayesian data analysis." Conference on Uncertainty in Artificial Intelligence. https://arxiv.org/abs/1603.07294
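The core idea of privatizing the sufficient statistics admits a compact illustration. Below is a minimal Python sketch for Bernoulli data with a beta prior: Laplace noise is added to the success count (the sufficient statistic), and the conjugate posterior is formed from the noisy statistic. Function and parameter names are my own for illustration, not the paper's notation or an implementation of its full analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_posterior_sample(data, epsilon, alpha=1.0, beta=1.0):
    """Sample from a beta posterior after privatizing the sufficient
    statistic (the success count) with Laplace noise.

    Illustrative sketch only; names and defaults are assumptions.
    """
    n = len(data)
    stat = float(np.sum(data))  # sufficient statistic for Bernoulli data
    # One record changes the count by at most 1, so Laplace(1/epsilon)
    # noise gives epsilon-differential privacy for the released statistic.
    noisy_stat = stat + rng.laplace(scale=1.0 / epsilon)
    # Clamp to the feasible range (post-processing preserves privacy).
    noisy_stat = min(max(noisy_stat, 0.0), float(n))
    # Conjugate beta update using the noisy statistic.
    return rng.beta(alpha + noisy_stat, beta + n - noisy_stat)

data = rng.integers(0, 2, size=1000)
sample = private_posterior_sample(data, epsilon=0.5)
```

Because the noise is injected only into the low-dimensional sufficient statistic, the downstream sampling step needs no further modification.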
Published in Advances in Neural Information Processing Systems, 2017
Following from previous work, here we further examine privatized methods for releasing samples from exponential family distributions. Compared to the previous work, here we use Rényi Differential Privacy as the privacy framework. Among other things, this framework permits us to use the stabilizing effect of a prior distribution to increase the privacy guarantees. This work explores how privacy can be achieved either by weakening the Bayesian update from the observations or by strengthening the prior distribution.
Recommended citation: Geumlek, Joseph, Shuang Song, and Kamalika Chaudhuri (2017). "Rényi differential privacy mechanisms for posterior sampling." Advances in Neural Information Processing Systems. http://papers.nips.cc/paper/7113-renyi-differential-privacy-mechanisms-for-posterior-sampling.pdf
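The "weakened update" route can be sketched in a few lines for a beta-Bernoulli model: scaling the likelihood's contribution by a tempering factor in (0, 1] interpolates between the prior alone and the full posterior. This is a hedged illustration of the general idea only; the paper's actual mechanisms, parameterization, and Rényi accounting are more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

def tempered_posterior_sample(data, temper=0.5, alpha=1.0, beta=1.0):
    """Sample from a beta posterior whose likelihood contribution is
    scaled down by `temper`, weakening the Bayesian update.

    Sketch with assumed names; not the paper's API.
    """
    n = len(data)
    successes = float(np.sum(data))
    # temper -> 0 recovers the prior (maximal stability/privacy);
    # temper = 1 recovers the ordinary conjugate posterior.
    return rng.beta(alpha + temper * successes,
                    beta + temper * (n - successes))

data = rng.integers(0, 2, size=200)
draw = tempered_posterior_sample(data, temper=0.25)
```

Strengthening the prior (larger alpha, beta) plays an analogous stabilizing role, which is the other route the paper describes.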
Published in IEEE International Symposium on Information Theory, 2019
We present an alternative privacy framework that focuses on obscuring the identity of specific generating distributions rather than specific observed values. This change has benefits and drawbacks, which this work explores. We present some basic mechanisms that achieve these guarantees while also exploiting the utility gains offered by the changed privacy framework. More sophisticated mechanisms are left as future work.
Recommended citation: Geumlek, Joseph, and Kamalika Chaudhuri (2019). "Profile-based privacy for locally private computations." IEEE International Symposium on Information Theory. https://arxiv.org/abs/1903.09084
Published in Advances in Neural Information Processing Systems, 2019
This work studies iterative data analysis using the mathematical theory of diffusion and the mixing of Markov chains. In doing so, the privacy guarantees of the uncertainty added in each iteration can be compounded. It provides a cleaner theoretical framework for the analysis of iterated Noisy Stochastic Gradient Descent, recovering the privacy guarantees already known for that setting, while also showing the flexibility of this new analysis.
Recommended citation: Balle, Borja, Gilles Barthe, Marco Gaboardi, and Joseph Geumlek (2019). "Privacy amplification by mixing and diffusion mechanisms." Advances in Neural Information Processing Systems. https://papers.nips.cc/paper/9485-privacy-amplification-by-mixing-and-diffusion-mechanisms.pdf
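The analyzed setting, iterated noisy stochastic gradient descent, is easy to sketch: each iteration clips the gradient and injects Gaussian noise, so the iterates form a Markov chain whose mixing underlies the amplified privacy guarantees. The sketch below uses assumed parameter names and a toy quadratic objective; it illustrates the setting, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_sgd(grad, x0, steps=100, lr=0.1, sigma=0.5, clip=1.0):
    """Run noisy gradient descent: clip each gradient to norm <= `clip`,
    then perturb it with Gaussian noise before stepping.

    Illustrative sketch; names and defaults are assumptions.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        g = g / max(1.0, np.linalg.norm(g) / clip)  # gradient clipping
        # Each noisy step is a Markov kernel; composing them lets the
        # per-iteration privacy guarantees compound.
        x = x - lr * (g + sigma * rng.normal(size=x.shape))
    return x

# Minimize a simple quadratic f(x) = ||x||^2 from x0 = (1, 1, 1).
x_final = noisy_sgd(lambda x: 2 * x, x0=np.ones(3))
```

The chain contracts toward the minimizer while the injected noise keeps its stationary distribution diffuse, which is the interplay the diffusion-based analysis formalizes.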
Published in UC San Diego (eScholarship), 2020
This is my doctoral thesis, synthesizing all of my work completed during my PhD program at UCSD. It explores privacy notions for releasing samples and various methods for achieving them: privatizing sufficient statistics, changing the posterior update rule, combining sources of uncertainty through the framework of diffusions, and hiding generating distributions through a convex optimization task.
Recommended citation: Geumlek, J. D. (2020). Sampling from Distributions under Differential Privacy Notions. UC San Diego. ProQuest ID: Geumlek_ucsd_0033D_19410. Merritt ID: ark:/13030/m5p03hfr. Retrieved from https://escholarship.org/uc/item/1dt1m5v1
teaching training, University of California San Diego, Teaching + Learning Commons, 2018
An academic-year-long training program for first-time instructors.
undergraduate course, University of California San Diego, 2019
Two offerings (Summer Sessions 1 and 2) of a 5-week introductory course on machine learning.
undergraduate course, University of California San Diego, 2020
Academic-year offering of a 10-week introductory course on probabilistic modeling.
undergraduate course, University of California San Diego, 2020
Remote instruction of an undergraduate course during the summer sessions of 2020.