**Title: Stochastic Three Points Method For Unconstrained Smooth Minimization**

**Date and Place: 9th April 2021 15:00 – link**

**Speaker: Dr El Hourcine Bergou (INRAE)**

**Abstract:**

In this work, we consider the unconstrained minimization problem of a smooth function in a setting where only function evaluations are possible. We design a novel randomized derivative-free algorithm—the stochastic three points (STP) method—and analyze its iteration complexity. At each iteration, STP generates a random search direction according to a certain fixed probability law. Our assumptions on this law are very mild: roughly speaking, any law that does not concentrate all of its measure on a halfspace passing through the origin will work. Although our approach is designed not to use derivatives explicitly, it covers some first-order methods. For instance, if the probability law is chosen to be the Dirac distribution concentrated on the sign of the gradient, then STP recovers the Signed Gradient Descent method. If the probability law is the uniform distribution on the coordinates of the gradient, then STP recovers the Randomized Coordinate Descent method.
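The per-iteration scheme described above can be sketched as follows. This is a minimal illustration, not the authors' reference implementation: the direction law here is the uniform distribution on the unit sphere (one admissible choice among many), and the geometric step-size decay is a hypothetical schedule chosen for the demo.

```python
import math
import random


def stp(f, x0, alpha=0.5, decay=0.99, iters=800, seed=0):
    """Sketch of the stochastic three points (STP) method.

    Each iteration draws a random direction s, evaluates f at the two
    trial points x + alpha*s and x - alpha*s (two new function
    evaluations), and keeps the best of the three points. The sphere
    law and the decay schedule are illustrative assumptions.
    """
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    for _ in range(iters):
        # Random direction on the unit sphere via normalized Gaussians.
        s = [rng.gauss(0.0, 1.0) for _ in x]
        norm = math.sqrt(sum(c * c for c in s)) or 1.0
        s = [c / norm for c in s]
        x_plus = [xi + alpha * si for xi, si in zip(x, s)]
        x_minus = [xi - alpha * si for xi, si in zip(x, s)]
        # Keep the best of the three points x, x + alpha*s, x - alpha*s,
        # so the objective value never increases.
        fx, x = min((fx, x), (f(x_plus), x_plus), (f(x_minus), x_minus))
        alpha *= decay  # illustrative step-size schedule
    return x, fx
```

For example, minimizing the quadratic `f(x) = ||x||^2` from the starting point `[3.0, -2.0, 1.0]` drives the function value close to zero while only ever evaluating `f`, never its gradient.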

The complexity of STP depends on the probability law via a simple characteristic closely related to the cosine measure, which is used in the analysis of deterministic direct search (DDS) methods. Unlike in DDS, where $O(n)$ function evaluations (where $n$ is the dimension of the problem) must be performed in each iteration in the worst case, our method only requires two new function evaluations per iteration. Consequently, while the complexity of DDS depends quadratically on $n$, our method depends linearly on $n$.

**Biography:**

Dr El Hourcine Bergou is a research scientist at INRAE. His research interests are in all areas that intersect with optimization, including algorithms, machine learning, statistics, and operations research. He is particularly interested in algorithms for large-scale optimization, including randomized and distributed optimization methods.