CU Boulder DLC Collaboratory

Here's a quick blog post that treads some similar ground; people may want to look at it ahead of time if they're curious ( ).

TITLE: Presenting Simmons, Nelson, and Simonsohn's 2011 article "False Positive Psychology" ( )
SPEAKER: Peter Shaffery, PhD student in Applied Mathematics, CU Boulder (with Vanja Dukic)

TITLE: Some Recent Results on Linear Programming Based Approximate Dynamic Programming
SPEAKER: Dan Zhang, Associate Professor of Operations Management, Leeds School of Business, CU Boulder
ABSTRACT: Linear programming based approximate dynamic programming has received considerable attention in the recent literature. In this approach, high-dimensional dynamic programs are solved approximately as large-scale linear programs to tackle the curse of dimensionality. The resulting formulations are called approximate linear programs (ALPs) and typically have a large number of decision variables and constraints, so a major challenge of the approach lies in solving the ALPs efficiently. In this talk, I report some recent applications and theoretical results in this area of research. Example applications include network revenue management, medical appointment scheduling, and queueing control. I will conclude with a discussion of research directions and potential applications in other areas.
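
As background (not material from the talk), here is a minimal sketch of the ALP idea on a toy problem: a small random MDP whose value function is approximated as Phi @ w for a low-dimensional polynomial basis Phi, with the weights found by scipy.optimize.linprog. The MDP, basis choice, and state-relevance weights below are illustrative assumptions, not Zhang's formulation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
nS, nA, K, gamma = 20, 3, 4, 0.95  # states, actions, basis size, discount

# Toy random MDP: transition kernels P[a, s, s'] and rewards r[s, a].
P = rng.random((nA, nS, nS))
P /= P.sum(axis=2, keepdims=True)
r = rng.random((nS, nA))

# Low-dimensional basis: polynomial features of the normalized state index.
# The first column is constant, which keeps the ALP feasible.
x = np.linspace(0, 1, nS)
Phi = np.vander(x, K, increasing=True)           # shape (nS, K)

# ALP: min_w c^T (Phi w)  s.t.  (Phi w)(s) >= r(s,a) + gamma * E[(Phi w)(s')]
# for every state-action pair, where c is a state-relevance weighting.
# Only K decision variables remain, but there are nS * nA constraints.
c_state = np.full(nS, 1.0 / nS)
A_ub = np.vstack([-(Phi - gamma * P[a] @ Phi) for a in range(nA)])  # <= form
b_ub = np.concatenate([-r[:, a] for a in range(nA)])
res = linprog(c=c_state @ Phi, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * K)
V_approx = Phi @ res.x                           # approximate value function
print(res.message)
print("approximate V on the first five states:", V_approx[:5])
```

With richer bases the constraint matrix still grows as nS * nA rows, which is exactly the scaling that makes efficient solution of the ALPs (e.g., via constraint sampling or reduction) the central difficulty the abstract refers to.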

TITLE: New Directions in Randomized Dimension Reduction for Modern Data Analysis
SPEAKER: Farhad Pourkamali-Anaraki, Postdoctoral Research Associate in Applied Mathematics, University of Colorado Boulder
ABSTRACT: With the growing scale and complexity of datasets across scientific disciplines, traditional data analysis methods are no longer practical for extracting meaningful information and patterns; the need to process large-scale datasets with memory- and computation-efficient algorithms arises in all fields of science and engineering. Randomization and probabilistic techniques have become fundamental tools in modern data science and machine learning for analyzing large-scale datasets. In particular, randomized dimensionality reduction techniques are effective in modern data settings because they provide a non-adaptive, data-independent mapping of high-dimensional datasets into a lower-dimensional space. However, such methods require strong theoretical understanding to ensure that the key properties of the original data are preserved. In this talk, we will focus on two important topics in modern data analysis: (1) K-means clustering and (2) low-rank approximation of kernel matrices for analyzing datasets with highly complex and nonlinear structures. Specifically, we will present a randomized algorithm for K-means clustering in the one-pass streaming setting that does not require incoherence or distributional assumptions on the data. Moreover, we will talk about the Nystrom method for generating low-rank approximations of the kernel matrices that arise in many machine learning problems, and we will discuss how randomized dimensionality reduction techniques allow us to obtain highly accurate and efficient low-rank approximations compared to other state-of-the-art Nystrom methods.
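
To make the kernel-approximation part concrete, here is a minimal sketch of the baseline Nystrom method with uniform column sampling; it is illustrative background rather than the improved algorithm from the talk, and the RBF kernel, bandwidth, and sizes below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and the rows of Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2 * sigma**2))

def nystrom(X, m, sigma=1.0):
    """Rank-m Nystrom approximation K_hat = C @ pinv(W) @ C.T, where
    C holds m sampled columns of K and W is the corresponding m x m block."""
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)   # uniform landmark sampling
    C = rbf_kernel(X, X[idx], sigma)             # (n, m) sampled columns
    W = C[idx]                                   # (m, m) intersection block
    return C @ np.linalg.pinv(W) @ C.T

X = rng.standard_normal((500, 10))
K_exact = rbf_kernel(X, X)
K_hat = nystrom(X, m=50)
err = np.linalg.norm(K_exact - K_hat) / np.linalg.norm(K_exact)
print(f"relative Frobenius error of the rank-50 approximation: {err:.3f}")
```

Note that only m columns of the n x n kernel matrix are ever formed inside nystrom (the full K_exact above is built purely to measure the error), which is what makes the method attractive at scale and why smarter column sampling and dimension reduction can improve its accuracy.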

SCHEDULE:
May 1, Anna Broido (CU-Boulder), "Scale-free networks are rare".
Apr 24, Tracy Babb (CU-Boulder), paper presentation of "Practical sketching algorithms for low-rank matrix approximation".
Apr 17, NO TALK (New Stat Major Open House at our usual time, 3:30 PM, Newton Lab).
Apr 10, Nathaniel Mathews (CU-Boulder), discussion of "Constrained Global Optimization of Expensive Black Box Functions Using Radial Basis Functions".
Apr 3, Jean-Gabriel Young (Universite Laval), "Network archeology: phase transition in the recoverability of network history".
Mar 21, Luca Trevisan (Berkeley), bonus talk: "A Theory of Spectral Clustering".
Mar 20, Ali Mousavi (Rice), "Data-Driven Computational Sensing".
Mar 13, Mark Bun (Princeton), "Finding Structure in the Landscape of Differential Privacy".
Mar 6, Antonio Blanca (Georgia Tech), "Efficient Sampling for Probabilistic Models".
Feb 27, Michael Hughes (Harvard), "Discovering Disease Subtypes that Improve Treatment Predictions: Interpretable Machine Learning for Personalized Medicine".
Feb 22, Genevieve Patterson (Microsoft Research), "Uncommon Sense: Using Neural Networks for Exploration and Creativity".
Feb 13, Peter Shaffery (CU-Boulder), presenting Simmons, Nelson, and Simonsohn's 2011 article "False Positive Psychology".
Feb 6, Dan Zhang (CU-Boulder), "Some Recent Results on Linear Programming Based Approximate Dynamic Programming".
Jan 30, David Kozak (CO School of Mines), "Global Convergence of Online Limited Memory BFGS".
Jan 23, Farhad Pourkamali-Anaraki (CU-Boulder), "New Directions in Randomized Dimension Reduction for Modern Data Analysis".







