
Ali Zindari

علی زینداری

ML Researcher

Download Resume

About Me

I’m a Master's student in Mathematics and Computer Science at Saarland University. I'm also a researcher at CISPA, working in the MLO Lab with Dr. Sebastian Stich. My main focus at the moment is optimization theory (centralized and decentralized).
You can also check my Latent CV here :))

Research Interests

My research interests include but are not limited to the following areas:

  • Representation Learning
  • Optimization

Currently, my main area of research is distributed optimization. I am trying to understand, from a theoretical point of view, why distributed methods such as Local SGD are effective, something that is not yet well reflected in the existing upper bounds. In addition, I am working on a new method for solving multi-player games in a distributed way that is provably more communication-efficient than SGD. In the future, I would also like to work on the theory of deep learning and representation learning, to answer questions such as: Why does contrastive representation learning work well? How can we adapt a neural network to a new task with a minimal number of samples? Which samples lead to good generalization?
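For a rough picture of what Local SGD does, here is a minimal, hypothetical Python sketch on a toy heterogeneous problem (the objective, noise model, and all hyperparameters are placeholders, not taken from any of my papers): each worker runs several local SGD steps on its own objective, and communication happens only when the local models are averaged.

import numpy as np

rng = np.random.default_rng(0)
d, num_workers, rounds, local_steps, lr, noise = 10, 4, 50, 10, 0.05, 0.1

# Toy heterogeneous setup: worker m minimizes 0.5 * ||x - b_m||^2,
# so the global optimum is the mean of the b_m's.
b = rng.normal(size=(num_workers, d))

def stochastic_grad(x, m):
    # gradient of worker m's objective plus Gaussian noise (stochasticity)
    return (x - b[m]) + noise * rng.normal(size=d)

x = np.zeros(d)
for _ in range(rounds):                    # communication rounds
    local_models = []
    for m in range(num_workers):
        xm = x.copy()                      # every worker starts from the shared model
        for _ in range(local_steps):       # K local SGD steps, no communication
            xm -= lr * stochastic_grad(xm, m)
        local_models.append(xm)
    x = np.mean(local_models, axis=0)      # averaging = one round of communication

print(np.linalg.norm(x - b.mean(axis=0)))  # distance to the global optimum

The question my work asks is why the local steps help so much in practice, even when the workers' objectives differ, which existing upper bounds do not fully explain.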

Collaboration

I'm always open to meeting new people and chatting about new topics in ML. Please let me know if you are interested in working on a project with me; you can simply send me an email. Here is a list of people I have had the chance to work or collaborate with:

Dr. Sebastian U. Stich (EPFL to CISPA), Dr. Tatjana Chavdarova (UC Berkeley to CISPA), Kumar Kshitij Patel (TTIC), Ruichen Luo (Zhejiang to ISTA), Dr. Ali Ramezani-Kebrya (EPFL to Oslo), Dr. Reza Shokri (NUS), Dr. Alexandre Alahi (EPFL), Yuejiang Liu (EPFL), MohammadHossein Bahari (EPFL), Alireza Parchami (UdS, Max Planck), Dr. Shadrokh Samavi (McMaster, Michigan), Dr. Pejman Khadivi (Seattle), Dr. Roshanak Roshandel (Amazon), Dr. Nader Karimi (IUT), Zahra Nabizadeh Shahrebabak (IUT), Parham Yazdkhasti (CISPA)

Experience

CISPA Helmholtz Center, MLO Lab

Research Assistant

Currently working on distributed optimization.

EPFL, LIONS Lab

Research Assistant

Worked on adversarial attacks on self-supervised models.

EPFL, Vita Lab

Research Assistant

Worked on the problem of off-road prediction in self-driving cars. I leveraged a contrastive representation learning approach to improve the latent space and reduced the off-road prediction rate by about 12%.

Dorsa Company

Computer Vision Engineer

Developed an algorithm for scanning the cross-sections of metallic products and created 3D visualizations of them.

Isfahan University of Technology

Research Assistant

Proposed a bifurcated neural network for segmenting COVID-19-infected regions in CT images. I also developed a method based on the Pix2Pix conditional GAN to generate synthetic data for data augmentation.

Isfahan University of Technology

Network Engineer

Worked on deploying the OpenStack cloud computing platform at the university's information center.

Sitco Company

Software Engineer

Worked as a software engineer at Sitco, developing accounting software.

Education

Saarland University

April 2023 - Present

Master of Mathematics and Computer Science

Isfahan University of Technology

Sept 2017 - Sept 2022

Bachelor of Computer Engineering

Papers

The Limits and Potentials of Local SGD for Distributed Heterogeneous Learning with Intermittent Communication

In this paper, we provide new lower bounds for local SGD under existing first-order data heterogeneity assumptions, showing that these assumptions are insufficient to prove the effectiveness of local update steps.

View Paper

On the Convergence of Local SGD Under Third-Order Smoothness and Hessian Similarity

There is a gap between the existing convergence rates for Local SGD and its observed performance on real-world problems; current rates do not seem to correctly capture the effectiveness of Local SGD. We first show that the existing rates for Local SGD in the heterogeneous setting cannot recover the correct rate when the global function is quadratic. We then derive a new rate for general strongly convex global functions that depends on third-order smoothness and Hessian similarity.
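For readers unfamiliar with these two assumptions, they are commonly stated as follows (a standard formulation; the exact constants and norms used in the paper may differ):

% Hessian similarity: each worker's objective f_i is close to the global objective f in second-order information
\|\nabla^2 f_i(x) - \nabla^2 f(x)\| \le \delta \quad \text{for all } x \text{ and all workers } i.

% Third-order smoothness: the Hessian of the global objective is Lipschitz
\|\nabla^2 f(x) - \nabla^2 f(y)\| \le Q \,\|x - y\| \quad \text{for all } x, y.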

View Paper

Segmentation of Lungs COVID Infected Regions by Attention Mechanism and Synthetic Generated Data

This research proposes a method for segmenting infected lung regions in a CT image. For this purpose, a convolutional neural network with an attention mechanism is used to detect infected areas with complex patterns. Attention blocks improve the segmentation accuracy by focusing on informative parts of the image. Furthermore, a generative adversarial network is used to generate synthetic images for data augmentation and expansion of small available datasets. Experimental results show the superiority of the proposed method compared to some existing procedures.

View Paper

Bifurcated Autoencoder for Segmentation of COVID-19 Infected Regions in CT Images

This paper proposes an approach to segment lung regions infected by COVID-19 to help radiologists diagnose the disease more accurately, quickly, and easily. We propose a bifurcated 2-D model for two types of segmentation. This model uses a shared encoder and a bifurcated connection to two separate decoders: one decoder segments the healthy regions of the lungs, while the other segments the infected regions. Experiments on publicly available images show that the bifurcated structure segments infected regions of the lungs better than the state of the art.
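As a purely illustrative PyTorch sketch of the shared-encoder / two-decoder idea (the layers, channel sizes, and input resolution below are placeholders, not the architecture from the paper):

import torch
import torch.nn as nn

class BifurcatedAutoencoder(nn.Module):
    # Toy sketch: one shared encoder, two decoders (lung mask / infection mask).
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        def decoder():
            return nn.Sequential(
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
            )
        self.lung_decoder = decoder()        # branch 1: healthy lung regions
        self.infection_decoder = decoder()   # branch 2: infected regions

    def forward(self, x):
        z = self.encoder(x)                  # shared representation
        return self.lung_decoder(z), self.infection_decoder(z)

masks = BifurcatedAutoencoder()(torch.randn(2, 1, 128, 128))
print([m.shape for m in masks])              # two masks of shape [2, 1, 128, 128]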

View Paper

Blogs

Convergence Rate of First-Order Methods for Saddle Point Problems

In this project, we provide convergence rates for different first-order methods used for min-max optimization problems. We consider Gradient Descent Ascent (GDA), the Proximal Point method (PP), and Extragradient (EG).
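As a quick illustration of why the choice of method matters, here is a minimal Python sketch on the bilinear toy game f(x, y) = xy (a standard example, not the blog's exact experiments): GDA spirals away from the saddle point, while Extragradient converges to it.

import numpy as np

eta, steps = 0.1, 1000

def grads(x, y):
    # grad_x f = y (descent direction for x), grad_y f = x (ascent direction for y)
    return y, x

# Gradient Descent Ascent (GDA)
x, y = 1.0, 1.0
for _ in range(steps):
    gx, gy = grads(x, y)
    x, y = x - eta * gx, y + eta * gy
print("GDA:", x, y)            # spirals outward, i.e. diverges

# Extragradient (EG): extrapolate first, then update with the look-ahead gradient
x, y = 1.0, 1.0
for _ in range(steps):
    gx, gy = grads(x, y)
    x_half, y_half = x - eta * gx, y + eta * gy      # extrapolation step
    gx, gy = grads(x_half, y_half)
    x, y = x - eta * gx, y + eta * gy                # update step
print("EG:", x, y)             # converges to the saddle point (0, 0)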

View Blog

Variance Reduction Fails, Momentum Saves!!!

In this project, we aim to show the effect of variance reduction methods in convex and non-convex optimization problems and how they can outperform pure stochastic methods. We also show that variance reduction methods are not always the best choice and may get stuck in local minima in the non-convex regime, and we propose a momentum-based solution to overcome this problem. Finally, we present experiments with the discussed methods using a neural network for prediction.
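As a rough sketch of the mechanics (a toy least-squares example with hypothetical hyperparameters, not the project's actual experiments), an SVRG-style variance-reduced gradient combined with heavy-ball momentum looks like this:

import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 5
A = rng.normal(size=(n, d))
y = rng.normal(size=n)

def grad_i(x, i):
    # per-sample gradient of 0.5 * (a_i @ x - y_i)^2
    return (A[i] @ x - y[i]) * A[i]

def full_grad(x):
    return (A @ x - y) @ A / n

x, v = np.zeros(d), np.zeros(d)
lr, beta, epochs = 0.005, 0.5, 20
for _ in range(epochs):
    x_snap, g_snap = x.copy(), full_grad(x)            # snapshot and its full gradient
    for _ in range(n):
        i = rng.integers(n)
        g = grad_i(x, i) - grad_i(x_snap, i) + g_snap  # variance-reduced gradient
        v = beta * v + g                               # heavy-ball momentum
        x -= lr * v

print(np.linalg.norm(full_grad(x)))                    # gradient norm at the final iterate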

View Blog

Convergence of SGD

SGD is the most popular optimization algorithm in deep learning and machine learning due to its computational efficiency. In this blog, I provide convergence proofs for this algorithm under different assumptions.
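For reference, here is the update rule and one textbook rate of this kind (stated for convex, L-smooth f with bounded gradient-noise variance and a tuned step size; the blog covers several assumption sets):

% SGD update with an unbiased stochastic gradient g_t of bounded variance
x_{t+1} = x_t - \gamma \, g_t, \qquad \mathbb{E}[g_t \mid x_t] = \nabla f(x_t), \qquad \mathbb{E}\|g_t - \nabla f(x_t)\|^2 \le \sigma^2

% A standard rate for the averaged iterate
\mathbb{E}\big[f(\bar{x}_T) - f(x^\star)\big] \le \mathcal{O}\!\left(\frac{L\|x_0 - x^\star\|^2}{T} + \frac{\sigma \|x_0 - x^\star\|}{\sqrt{T}}\right), \qquad \bar{x}_T = \frac{1}{T}\sum_{t=1}^{T} x_t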

View Blog

Persian Blogs

Isfahan University of Technology: The Biggest Mistake of My Life

In this blog, I want to talk about the worst experience and choice of my life: choosing computer engineering at Isfahan University of Technology. I intend to show, in detail and case by case, why this university, and especially its computer engineering department, is a genuine disaster. What I write is mostly about this university's computer engineering program, though I'm sure much of it generalizes to the other majors as well; still, computer engineering at IUT is by far worse than the rest. You may disagree with me on some points if your major was not computer engineering.

View Blog

Skills

Some facts about me that you may not care about

What I DON'T like (maybe hate)

What I really like (sorted randomly)