Dan Alistarh
Person information
- affiliation: IST Austria, Klosterneuburg, Austria
- affiliation (former): MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, USA
2020 – today
- 2024
- [j36] Dan Alistarh: Distributed Computing Column 87 Distributed and Algorithmic Thinking across Domains. SIGACT News 55(2) (2024)
- [j35] Denis Kuznedelev, Eldar Kurtic, Eugenia Iofinova, Elias Frantar, Alexandra Peste, Dan Alistarh: Accurate Neural Network Pruning Requires Rethinking Sparse Optimization. Trans. Mach. Learn. Res. 2024 (2024)
- [c123] Rustem Islamov, Mher Safaryan, Dan Alistarh: AsGrad: A Sharp Unified Analysis of Asynchronous-SGD Algorithms. AISTATS 2024: 649-657
- [c122] Hossein Zakerinia, Shayan Talaei, Giorgi Nadiradze, Dan Alistarh: Communication-Efficient Federated Learning With Data and Client Heterogeneity. AISTATS 2024: 3448-3456
- [c121] Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh: Federated SGD with Local Asynchrony. ICDCS 2024: 857-868
- [c120] Tim Dettmers, Ruslan Svirschevski, Vage Egiazarian, Denis Kuznedelev, Elias Frantar, Saleh Ashkboos, Alexander Borzunov, Torsten Hoefler, Dan Alistarh: SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression. ICLR 2024
- [c119] Elias Frantar, Carlos Riquelme Ruiz, Neil Houlsby, Dan Alistarh, Utku Evci: Scaling Laws for Sparsely-Connected Foundation Models. ICLR 2024
- [c118] Vage Egiazarian, Andrei Panferov, Denis Kuznedelev, Elias Frantar, Artem Babenko, Dan Alistarh: Extreme Compression of Large Language Models via Additive Quantization. ICML 2024
- [c117] Arshia Soltani Moakhar, Eugenia Iofinova, Elias Frantar, Dan Alistarh: SPADE: Sparsity-Guided Debugging for Deep Neural Networks. ICML 2024
- [c116] Ionut-Vlad Modoranu, Aleksei Kalinov, Eldar Kurtic, Elias Frantar, Dan Alistarh: Error Feedback Can Accurately Compress Preconditioners. ICML 2024
- [c115] Mahdi Nikdan, Soroush Tabesh, Elvir Crncevic, Dan Alistarh: RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation. ICML 2024
- [c114] Ilya Kokorin, Victor Yudov, Vitaly Aksenov, Dan Alistarh: Wait-free Trees with Asymptotically-Efficient Range Queries. IPDPS 2024: 169-179
- [c113] Elias Frantar, Dan Alistarh: QMoE: Sub-1-Bit Compression of Trillion Parameter Models. MLSys 2024
- [c112] Ilia Markov, Kaveh Alim, Elias Frantar, Dan Alistarh: L-GreCo: Layerwise-adaptive Gradient Compression For Efficient Data-parallel Deep Learning. MLSys 2024
- [c111] Dan Alistarh, Krishnendu Chatterjee, Mehrdad Karrabi, John Lazarsfeld: Game Dynamics and Equilibrium Computation in the Population Protocol Model. PODC 2024: 40-49
- [e2] Dan Alistarh: 38th International Symposium on Distributed Computing, DISC 2024, October 28 to November 1, 2024, Madrid, Spain. LIPIcs 319, Schloss Dagstuhl - Leibniz-Zentrum für Informatik 2024, ISBN 978-3-95977-352-2 [contents]
- [i111] Mahdi Nikdan, Soroush Tabesh, Dan Alistarh: RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation. CoRR abs/2401.04679 (2024)
- [i110] Vage Egiazarian, Andrei Panferov, Denis Kuznedelev, Elias Frantar, Artem Babenko, Dan Alistarh: Extreme Compression of Large Language Models via Additive Quantization. CoRR abs/2401.06118 (2024)
- [i109] Saleh Ashkboos, Amirkeivan Mohtashami, Maximilian L. Croci, Bo Li, Martin Jaggi, Dan Alistarh, Torsten Hoefler, James Hensman: QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs. CoRR abs/2404.00456 (2024)
- [i108] Aniruddha Nrusimha, Mayank Mishra, Naigang Wang, Dan Alistarh, Rameswar Panda, Yoon Kim: Mitigating the Impact of Outlier Channels for Language Model Quantization with Activation Regularization. CoRR abs/2404.03605 (2024)
- [i107] Abhinav Agarwalla, Abhay Gupta, Alexandre Marques, Shubhra Pandit, Michael Goin, Eldar Kurtic, Kevin Leong, Tuan Nguyen, Mahmoud Salem, Dan Alistarh, Sean Lie, Mark Kurtz: Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment. CoRR abs/2405.03594 (2024)
- [i106] Vladimir Malinovskii, Denis Mazur, Ivan Ilin, Denis Kuznedelev, Konstantin Burlachenko, Kai Yi, Dan Alistarh, Peter Richtárik: PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression. CoRR abs/2405.14852 (2024)
- [i105] Ionut-Vlad Modoranu, Mher Safaryan, Grigory Malinovsky, Eldar Kurtic, Thomas Robert, Peter Richtárik, Dan Alistarh: MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence. CoRR abs/2405.15593 (2024)
- [i104] Shashata Sawmya, Linghao Kong, Ilia Markov, Dan Alistarh, Nir Shavit: Sparse Expansion and Neuronal Disentanglement. CoRR abs/2405.15756 (2024)
- [i103] Eldar Kurtic, Amir Moeini, Dan Alistarh: Mathador-LM: A Dynamic Benchmark for Mathematical Reasoning on Large Language Models. CoRR abs/2406.12572 (2024)
- [i102] Armand Nicolicioiu, Eugenia Iofinova, Eldar Kurtic, Mahdi Nikdan, Andrei Panferov, Ilia Markov, Nir Shavit, Dan Alistarh: Panza: A Personalized Text Writing Assistant via Data Playback and Local Fine-Tuning. CoRR abs/2407.10994 (2024)
- [i101] Elias Frantar, Roberto L. Castro, Jiale Chen, Torsten Hoefler, Dan Alistarh: MARLIN: Mixed-Precision Auto-Regressive Parallel Inference on Large Language Models. CoRR abs/2408.11743 (2024)
- [i100] Diyuan Wu, Ionut-Vlad Modoranu, Mher Safaryan, Denis Kuznedelev, Dan Alistarh: The Iterative Optimal Brain Surgeon: Faster Sparse Recovery by Leveraging Second-Order Information. CoRR abs/2408.17163 (2024)
- [i99] Vage Egiazarian, Denis Kuznedelev, Anton Voronov, Ruslan Svirschevski, Michael Goin, Daniil Pavlov, Dan Alistarh, Dmitry Baranchuk: Accurate Compression of Text-to-Image Diffusion Models via Vector Quantization. CoRR abs/2409.00492 (2024)
- 2023
- [j34] Vitaly Aksenov, Dan Alistarh, Alexandra Drozdova, Amirkeivan Mohtashami: The splay-list: a distribution-adaptive concurrent skip-list. Distributed Comput. 36(3): 395-418 (2023)
- [j33] Nikita Koval, Dmitry Khalanskiy, Dan Alistarh: CQS: A Formally-Verified Framework for Fair and Abortable Synchronization. Proc. ACM Program. Lang. 7(PLDI): 244-266 (2023)
- [j32] Dan Alistarh, James Aspnes, Faith Ellen, Rati Gelashvili, Leqi Zhu: Why Extension-Based Proofs Fail. SIAM J. Comput. 52(4): 913-944 (2023)
- [j31] Dan Alistarh: Distributed Computing Column 86: A Summary of PODC 2022. SIGACT News 54(1): 105 (2023)
- [j30] Dan Alistarh, Alkida Balliu, Dimitrios Los, Sean Ovens: A Brief Summary of PODC 2022. SIGACT News 54(1): 106-112 (2023)
- [j29] Dan Alistarh: Distributed Computing Column 87 Recent Advances in Multi-Pass Graph Streaming Lower Bounds. SIGACT News 54(3): 46-47 (2023)
- [j28] Dan Alistarh: Distributed Computing Column 86 The Environmental Cost of Our Conferences. SIGACT News 54(4): 92-93 (2023)
- [j27] Dan Alistarh, Faith Ellen, Joel Rybicki: Wait-free approximate agreement on graphs. Theor. Comput. Sci. 948: 113733 (2023)
- [c110] Nikita Koval, Alexander Fedorov, Maria Sokolova, Dmitry Tsitelov, Dan Alistarh: Lincheck: A Practical Framework for Testing Concurrent Data Structures on JVM. CAV (1) 2023: 156-169
- [c109] Eugenia Iofinova, Alexandra Peste, Dan Alistarh: Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures. CVPR 2023: 24364-24373
- [c108] Elias Frantar, Saleh Ashkboos, Torsten Hoefler, Dan Alistarh: OPTQ: Accurate Quantization for Generative Pre-trained Transformers. ICLR 2023
- [c107] Alexandra Peste, Adrian Vladu, Eldar Kurtic, Christoph H. Lampert, Dan Alistarh: CrAM: A Compression-Aware Minimizer. ICLR 2023
- [c106] Elias Frantar, Dan Alistarh: SparseGPT: Massive Language Models Can be Accurately Pruned in One-Shot. ICML 2023: 10323-10337
- [c105] Ilia Markov, Adrian Vladu, Qi Guo, Dan Alistarh: Quantized Distributed Training of Large Models with Convergence Guarantees. ICML 2023: 24020-24044
- [c104] Mahdi Nikdan, Tommaso Pegolotti, Eugenia Iofinova, Eldar Kurtic, Dan Alistarh: SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks at the Edge. ICML 2023: 26215-26227
- [c103] Eldar Kurtic, Elias Frantar, Dan Alistarh: ZipLM: Inference-Aware Structured Pruning of Language Models. NeurIPS 2023
- [c102] Denis Kuznedelev, Eldar Kurtic, Elias Frantar, Dan Alistarh: CAP: Correlation-Aware Pruning for Highly-Accurate Sparse Vision Models. NeurIPS 2023
- [c101] Mher Safaryan, Alexandra Peste, Dan Alistarh: Knowledge Distillation Performs Partial Variance Reduction. NeurIPS 2023
- [c100] Nikita Koval, Dan Alistarh, Roman Elizarov: Fast and Scalable Channels in Kotlin Coroutines. PPoPP 2023: 107-118
- [c99] Alexander Fedorov, Diba Hashemi, Giorgi Nadiradze, Dan Alistarh: Provably-Efficient and Internally-Deterministic Parallel Union-Find. SPAA 2023: 261-271
- [i98] Elias Frantar, Dan Alistarh: SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot. CoRR abs/2301.00774 (2023)
- [i97] Ilia Markov, Adrian Vladu, Qi Guo, Dan Alistarh: Quantized Distributed Training of Large Models with Convergence Guarantees. CoRR abs/2302.02390 (2023)
- [i96] Eldar Kurtic, Elias Frantar, Dan Alistarh: ZipLM: Hardware-Aware Structured Pruning of Language Models. CoRR abs/2302.04089 (2023)
- [i95] Mahdi Nikdan, Tommaso Pegolotti, Eugenia Iofinova, Eldar Kurtic, Dan Alistarh: SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks. CoRR abs/2302.04852 (2023)
- [i94] Denis Kuznedelev, Soroush Tabesh, Kimia Noorbakhsh, Elias Frantar, Sara Beery, Eldar Kurtic, Dan Alistarh: Vision Models Can Be Efficiently Specialized via Few-Shot Task-Aware Compression. CoRR abs/2303.14409 (2023)
- [i93] Alexander Fedorov, Diba Hashemi, Giorgi Nadiradze, Dan Alistarh: Provably-Efficient and Internally-Deterministic Parallel Union-Find. CoRR abs/2304.09331 (2023)
- [i92] Eugenia Iofinova, Alexandra Peste, Dan Alistarh: Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures. CoRR abs/2304.12622 (2023)
- [i91] Mher Safaryan, Alexandra Peste, Dan Alistarh: Knowledge Distillation Performs Partial Variance Reduction. CoRR abs/2305.17581 (2023)
- [i90] Tim Dettmers, Ruslan Svirschevski, Vage Egiazarian, Denis Kuznedelev, Elias Frantar, Saleh Ashkboos, Alexander Borzunov, Torsten Hoefler, Dan Alistarh: SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression. CoRR abs/2306.03078 (2023)
- [i89] Ionut-Vlad Modoranu, Aleksei Kalinov, Eldar Kurtic, Dan Alistarh: Error Feedback Can Accurately Compress Preconditioners. CoRR abs/2306.06098 (2023)
- [i88] John Lazarsfeld, Dan Alistarh: Decentralized Learning Dynamics in the Gossip Model. CoRR abs/2306.08670 (2023)
- [i87] Tommaso Pegolotti, Elias Frantar, Dan Alistarh, Markus Püschel: QIGen: Generating Efficient Kernels for Quantized Inference on Large Language Models. CoRR abs/2307.03738 (2023)
- [i86] Dan Alistarh, Krishnendu Chatterjee, Mehrdad Karrabi, John Lazarsfeld: Repeated Game Dynamics in Population Protocols. CoRR abs/2307.07297 (2023)
- [i85] Denis Kuznedelev, Eldar Kurtic, Eugenia Iofinova, Elias Frantar, Alexandra Peste, Dan Alistarh: Accurate Neural Network Pruning Requires Rethinking Sparse Optimization. CoRR abs/2308.02060 (2023)
- [i84] Elias Frantar, Carlos Riquelme, Neil Houlsby, Dan Alistarh, Utku Evci: Scaling Laws for Sparsely-Connected Foundation Models. CoRR abs/2309.08520 (2023)
- [i83] Arshia Soltani Moakhar, Eugenia Iofinova, Dan Alistarh: SPADE: Sparsity-Guided Debugging for Deep Neural Networks. CoRR abs/2310.04519 (2023)
- [i82] Ilya Kokorin, Dan Alistarh, Vitaly Aksenov: Wait-free Trees with Asymptotically-Efficient Range Queries. CoRR abs/2310.05293 (2023)
- [i81] Alexander Slastin, Dan Alistarh, Vitaly Aksenov: Efficient Self-Adjusting Search Trees via Lazy Updates. CoRR abs/2310.05298 (2023)
- [i80] Eldar Kurtic, Denis Kuznedelev, Elias Frantar, Michael Goin, Dan Alistarh: Sparse Fine-tuning for Inference Acceleration of Large Language Models. CoRR abs/2310.06927 (2023)
- [i79] Saleh Ashkboos, Ilia Markov, Elias Frantar, Tingxuan Zhong, Xincheng Wang, Jie Ren, Torsten Hoefler, Dan Alistarh: Towards End-to-end 4-Bit Inference on Generative Large Language Models. CoRR abs/2310.09259 (2023)
- [i78] Elias Frantar, Dan Alistarh: QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models. CoRR abs/2310.16795 (2023)
- [i77] Rustem Islamov, Mher Safaryan, Dan Alistarh: AsGrad: A Sharp Unified Analysis of Asynchronous-SGD Algorithms. CoRR abs/2310.20452 (2023)
- [i76] Paniz Halvachi, Alexandra Peste, Dan Alistarh, Christoph H. Lampert: ELSA: Partial Weight Freezing for Overhead-Free Sparse Network Deployment. CoRR abs/2312.06872 (2023)
- [i75] Eldar Kurtic, Torsten Hoefler, Dan Alistarh: How to Prune Your Language Model: Recovering Accuracy on the "Sparsity May Cry" Benchmark. CoRR abs/2312.13547 (2023)
- 2022
- [j26] Dan Alistarh, Giorgi Nadiradze, Amirmojtaba Sabour: Dynamic Averaging Load Balancing on Cycles. Algorithmica 84(4): 1007-1029 (2022)
- [j25] Dan Alistarh: Distributed Computing Column 85 Elastic Consistency: A Consistency Criterion for Distributed Optimization. SIGACT News 53(2): 63 (2022)
- [j24] Dan Alistarh, Ilia Markov, Giorgi Nadiradze: Elastic Consistency: A Consistency Criterion for Distributed Optimization. SIGACT News 53(2): 64-82 (2022)
- [c98] Eugenia Iofinova, Alexandra Peste, Mark Kurtz, Dan Alistarh: How Well Do Sparse ImageNet Models Transfer? CVPR 2022: 12256-12266
- [c97] Eldar Kurtic, Daniel Campos, Tuan Nguyen, Elias Frantar, Mark Kurtz, Benjamin Fineran, Michael Goin, Dan Alistarh: The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models. EMNLP 2022: 4163-4181
- [c96] Elias Frantar, Dan Alistarh: SPDY: Accurate Pruning with Speedup Guarantees. ICML 2022: 6726-6743
- [c95] Ilia Markov, Hamidreza Ramezani-Kebrya, Dan Alistarh: CGX: adaptive system support for communication-efficient deep learning. Middleware 2022: 241-254
- [c94] Elias Frantar, Dan Alistarh: Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning. NeurIPS 2022
- [c93] Dan Alistarh, Joel Rybicki, Sasha Voitovych: Near-Optimal Leader Election in Population Protocols on Graphs. PODC 2022: 246-256
- [c92] Anastasiia Postnikova, Nikita Koval, Giorgi Nadiradze, Dan Alistarh: Multi-queues can be state-of-the-art priority schedulers. PPoPP 2022: 353-367
- [c91] Trevor Brown, William Sigouin, Dan Alistarh: PathCAS: an efficient middle ground for concurrent search data structures. PPoPP 2022: 385-399
- [i74] Elias Frantar, Dan Alistarh: SPDY: Accurate Pruning with Speedup Guarantees. CoRR abs/2201.13096 (2022)
- [i73] Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh: Scaling the Wild: Decentralizing Hogwild!-style Shared-memory SGD. CoRR abs/2203.06638 (2022)
- [i72] Eldar Kurtic, Daniel Campos, Tuan Nguyen, Elias Frantar, Mark Kurtz, Benjamin Fineran, Michael Goin, Dan Alistarh: The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models. CoRR abs/2203.07259 (2022)
- [i71] Dan Alistarh, Joel Rybicki, Sasha Voitovych: Near-Optimal Leader Election in Population Protocols on Graphs. CoRR abs/2205.12597 (2022)
- [i70] Hossein Zakerinia, Shayan Talaei, Giorgi Nadiradze, Dan Alistarh: QuAFL: Federated Averaging Can Be Both Asynchronous and Communication-Efficient. CoRR abs/2206.10032 (2022)
- [i69] Alexandra Peste, Adrian Vladu, Dan Alistarh, Christoph H. Lampert: CrAM: A Compression-Aware Minimizer. CoRR abs/2207.14200 (2022)
- [i68] Elias Frantar, Dan Alistarh: Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning. CoRR abs/2208.11580 (2022)
- [i67] Eldar Kurtic, Dan Alistarh: GMP*: Well-Tuned Global Magnitude Pruning Can Outperform Most BERT-Pruning Methods. CoRR abs/2210.06384 (2022)
- [i66] Shayan Talaei, Giorgi Nadiradze, Dan Alistarh: Hybrid Decentralized Optimization: First- and Zeroth-Order Optimizers Can Be Jointly Leveraged For Faster Convergence. CoRR abs/2210.07703 (2022)
- [i65] Denis Kuznedelev, Eldar Kurtic, Elias Frantar, Dan Alistarh: oViT: An Accurate Second-Order Pruning Framework for Vision Transformers. CoRR abs/2210.09223 (2022)
- [i64] Elias Frantar, Saleh Ashkboos, Torsten Hoefler, Dan Alistarh: GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers. CoRR abs/2210.17323 (2022)
- [i63] Mohammadreza Alimohammadi, Ilia Markov, Elias Frantar, Dan Alistarh: L-GreCo: An Efficient and General Framework for Layerwise-Adaptive Gradient Compression. CoRR abs/2210.17357 (2022)
- [i62] Nikita Koval, Dan Alistarh, Roman Elizarov: Fast and Scalable Channels in Kotlin Coroutines. CoRR abs/2211.04986 (2022)
- [i61] Trevor Brown, William Sigouin, Dan Alistarh: PathCAS: An Efficient Middle Ground for Concurrent Search Data Structures. CoRR abs/2212.09851 (2022)
- 2021
- [j23] Ali Ramezani-Kebrya, Fartash Faghri, Ilya Markov, Vitalii Aksenov, Dan Alistarh, Daniel M. Roy: NUQSGD: Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization. J. Mach. Learn. Res. 22: 114:1-114:43 (2021)
- [j22] Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste: Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks. J. Mach. Learn. Res. 22: 241:1-241:124 (2021)
- [j21] Dan Alistarh: Distributed Computing Column 81: Byzantine Agreement with Less Communication: Recent Advances. SIGACT News 52(1): 70 (2021)
- [j20] Dan Alistarh: Distributed Computing Column 82 Distributed Computability: A Few Results Masters Students Should Know. SIGACT News 52(2): 91 (2021)
- [j19] Dan Alistarh: Distributed Computing Column 83 Five Ways Not To Fool Yourself: Designing Experiments for Understanding Performance. SIGACT News 52(3): 60 (2021)
- [j18] Dan Alistarh: Distributed Computing Column 84: Perspectives on the Paper "CCS Expressions, Finite State Processes, and Three Problems of Equivalence". SIGACT News 52(4): 74-75 (2021)
- [j17] Shigang Li, Tal Ben-Nun, Giorgi Nadiradze, Salvatore Di Girolamo, Nikoli Dryden, Dan Alistarh, Torsten Hoefler: Breaking (Global) Barriers in Parallel Stochastic Optimization With Wait-Avoiding Group Averaging. IEEE Trans. Parallel Distributed Syst. 32(7): 1725-1739 (2021)
- [c90] Vyacheslav Kungurtsev, Malcolm Egan, Bapi Chatterjee, Dan Alistarh: Asynchronous Optimization Methods for Efficient Training of Deep Neural Networks with Guarantees. AAAI 2021: 8209-8216
- [c89] Giorgi Nadiradze, Ilia Markov, Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh: Elastic Consistency: A Practical Consistency Model for Distributed Stochastic Gradient Descent. AAAI 2021: 9037-9045
- [c88] Zeyuan Allen-Zhu, Faeze Ebrahimianghazani, Jerry Li, Dan Alistarh: Byzantine-Resilient Non-Convex Stochastic Gradient Descent. ICLR 2021
- [c87] Peter Davies, Vijaykrishna Gurunanthan, Niusha Moshrefi, Saleh Ashkboos, Dan Alistarh: New Bounds For Distributed Mean Estimation and Variance Reduction. ICLR 2021
- [c86] Foivos Alimisis, Peter Davies, Dan Alistarh: Communication-Efficient Distributed Optimization with Quantized Preconditioners. ICML 2021: 196-206
- [c85] Foivos Alimisis, Peter Davies, Bart Vandereycken, Dan Alistarh: Distributed Principal Component Analysis with Limited Communication. NeurIPS 2021: 2823-2834
- [c84] Giorgi Nadiradze, Amirmojtaba Sabour, Peter Davies, Shigang Li, Dan Alistarh: Asynchronous Decentralized SGD with Quantized and Local Updates. NeurIPS 2021: 6829-6842
- [c83] Janne H. Korhonen, Dan Alistarh: Towards Tight Communication Lower Bounds for Distributed Optimisation. NeurIPS 2021: 7254-7266
- [c82] Alexandra Peste, Eugenia Iofinova, Adrian Vladu, Dan Alistarh: AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks. NeurIPS 2021: 8557-8570
- [c81] Elias Frantar, Eldar Kurtic, Dan Alistarh: M-FAC: Efficient Matrix-Free Approximations of Second-Order Information. NeurIPS 2021: 14873-14886
- [c80] Dan Alistarh, Rati Gelashvili, Joel Rybicki: Fast Graphical Population Protocols. OPODIS 2021: 14:1-14:18
- [c79] Dan Alistarh, Martin Töpfer, Przemyslaw Uznanski: Comparison Dynamics in Population Protocols. PODC 2021: 55-65
- [c78] Dan Alistarh, Peter Davies: Collecting Coupons is Faster with Friends. SIROCCO 2021: 3-12
- [c77] Dan Alistarh, Faith Ellen, Joel Rybicki: Wait-Free Approximate Agreement on Graphs. SIROCCO 2021: 87-105
- [c76] Alexander Fedorov, Nikita Koval, Dan Alistarh: A Scalable Concurrent Algorithm for Dynamic Connectivity. SPAA 2021: 208-220
- [c75] Dan Alistarh, Rati Gelashvili, Giorgi Nadiradze: Lower Bounds for Shared-Memory Leader Election Under Bounded Write Contention. DISC 2021: 4:1-4:17
- [c74] Dan Alistarh, Rati Gelashvili, Joel Rybicki: Brief Announcement: Fast Graphical Population Protocols. DISC 2021: 43:1-43:4
- [i60] Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste: Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks. CoRR abs/2102.00554 (2021)
- [i59] Foivos Alimisis, Peter Davies, Dan Alistarh: Communication-Efficient Distributed Optimization with Quantized Preconditioners. CoRR abs/2102.07214 (2021)
- [i58] Dan Alistarh, Rati Gelashvili, Joel Rybicki: Fast Graphical Population Protocols. CoRR abs/2102.08808 (2021)
- [i57] Dan Alistarh, Faith Ellen, Joel Rybicki: Wait-free approximate agreement on graphs. CoRR abs/2103.08949 (2021)
- [i56] Alexander Fedorov, Nikita Koval, Dan Alistarh: A Scalable Concurrent Algorithm for Dynamic Connectivity. CoRR abs/2105.08098 (2021)
- [i55] Alexandra Peste, Eugenia Iofinova, Adrian Vladu, Dan Alistarh: AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks. CoRR abs/2106.12379 (2021)
- [i54] Elias Frantar, Eldar Kurtic, Dan Alistarh: Efficient Matrix-Free Approximations of Second-Order Information, with Applications to Pruning and Optimization. CoRR abs/2107.03356 (2021)
- [i53] Alexandra Peste, Dan Alistarh, Christoph H. Lampert: SSSE: Efficiently Erasing Samples from Trained Machine Learning Models. CoRR abs/2107.03860 (2021)
- [i52] Dan Alistarh, Rati Gelashvili, Giorgi Nadiradze: Lower Bounds for Shared-Memory Leader Election under Bounded Write Contention. CoRR abs/2108.02802 (2021)
- [i51] Anastasiia Postnikova, Nikita Koval, Giorgi Nadiradze, Dan Alistarh: Multi-Queues Can Be State-of-the-Art Priority Schedulers. CoRR abs/2109.00657 (2021)
- [i50] Foivos Alimisis, Peter Davies, Bart Vandereycken, Dan Alistarh: Distributed Principal Component Analysis with Limited Communication. CoRR abs/2110.14391 (2021)
- [i49] Ilia Markov, Hamidreza Ramezani-Kebrya, Dan Alistarh: Project CGX: Scalable Deep Learning on Commodity GPUs. CoRR abs/2111.08617 (2021)
- [i48] Nikita Koval, Dmitry Khalanskiy, Dan Alistarh: A Formally-Verified Framework for Fair Synchronization in Kotlin Coroutines. CoRR abs/2111.12682 (2021)
- [i47] Eugenia Iofinova, Alexandra Peste, Mark Kurtz, Dan Alistarh: How Well Do Sparse Imagenet Models Transfer? CoRR abs/2111.13445 (2021)
- [i46] Dan Alistarh, Peter Davies: Collecting Coupons is Faster with Friends. CoRR abs/2112.05830 (2021)
- 2020
- [j16] Dan Alistarh: Distributed Computing Column 77 Consensus Dynamics: An Overview. SIGACT News 51(1): 57 (2020)
- [j15] Dan Alistarh: Distributed Computing Column 78: 60 Years of Mastering Concurrent Computing through Sequential Thinking. SIGACT News 51(2): 58 (2020)
- [j14] Dan Alistarh: Distributed Computing Column 79: Using Round Elimination to Understand Locality. SIGACT News 51(3): 62 (2020)
- [j13] Dan Alistarh: Distributed Computing Column 80: Annual Review 2020. SIGACT News 51(4): 73-74 (2020)
- [j12] Nezihe Merve Gürel, Kaan Kara, Alen Stojanov, Tyler M. Smith, Thomas Lemmin, Dan Alistarh, Markus Püschel, Ce Zhang: Compressive Sensing Using Iterative Hard Thresholding With Low Precision Data Representation: Theory and Applications. IEEE Trans. Signal Process. 68: 4268-4282 (2020)
- [c73] Dan Alistarh, Giorgi Nadiradze, Amirmojtaba Sabour: Dynamic Averaging Load Balancing on Cycles. ICALP 2020: 7:1-7:16
- [c72] Nikola Konstantinov, Elias Frantar, Dan Alistarh, Christoph Lampert: On the Sample Complexity of Adversarial Multi-Source PAC Learning. ICML 2020: 5416-5425
- [c71] Mark Kurtz, Justin Kopinsky, Rati Gelashvili, Alexander Matveev, John Carr, Michael Goin, William M. Leiserson, Sage Moore, Nir Shavit, Dan Alistarh: Inducing and Exploiting Activation Sparsity for Fast Inference on Deep Neural Networks. ICML 2020: 5533-5543
- [c70] Vitaly Aksenov, Dan Alistarh, Janne H. Korhonen: Scalable Belief Propagation via Relaxed Scheduling. NeurIPS 2020
- [c69] Fartash Faghri, Iman Tabrizian, Ilia Markov, Dan Alistarh, Daniel M. Roy, Ali Ramezani-Kebrya: Adaptive Gradient Quantization for Data-Parallel SGD. NeurIPS 2020
- [c68] Sidak Pal Singh, Dan Alistarh: WoodFisher: Efficient Second-Order Approximation for Neural Network Compression. NeurIPS 2020
- [c67] Dan Alistarh, James Aspnes, Faith Ellen, Rati Gelashvili, Leqi Zhu: Brief Announcement: Why Extension-Based Proofs Fail. PODC 2020: 54-56
- [c66] Shigang Li, Tal Ben-Nun, Salvatore Di Girolamo, Dan Alistarh, Torsten Hoefler: Taming unbalanced training workloads in deep learning with partial collective operations. PPoPP 2020: 45-61
- [c65] Trevor Brown, Aleksandar Prokopec, Dan Alistarh: Non-blocking interpolation search trees with doubly-logarithmic running time. PPoPP 2020: 276-291
- [c64] Nikita Koval, Maria Sokolova, Alexander Fedorov, Dan Alistarh, Dmitry Tsitelov: Testing concurrency on the JVM with lincheck. PPoPP 2020: 423-424
- [c63] Dan Alistarh, Trevor Brown, Nandini Singhal: Memory Tagging: Minimalist Synchronization for Scalable Concurrent Data Structures. SPAA 2020: 37-49
- [c62] Vitaly Aksenov, Dan Alistarh, Alexandra Drozdova, Amirkeivan Mohtashami: The Splay-List: A Distribution-Adaptive Concurrent Skip-List. DISC 2020: 3:1-3:18
- [i45] Aleksandar Prokopec, Trevor Brown, Dan Alistarh: Analysis and Evaluation of Non-Blocking Interpolation Search Trees. CoRR abs/2001.00413 (2020)
- [i44] Dan Alistarh, Bapi Chatterjee, Vyacheslav Kungurtsev: Elastic Consistency: A General Consistency Model for Distributed Stochastic Gradient Descent. CoRR abs/2001.05918 (2020)
- [i43] Dan Alistarh, Saleh Ashkboos, Peter Davies: Distributed Mean Estimation with Optimal Error Bounds. CoRR abs/2002.09268 (2020)
- [i42] Nikola Konstantinov, Elias Frantar, Dan Alistarh, Christoph H. Lampert: On the Sample Complexity of Adversarial Multi-Source PAC Learning. CoRR abs/2002.10384 (2020)
- [i41] Vitaly Aksenov, Dan Alistarh, Janne H. Korhonen: Relaxed Scheduling for Scalable Belief Propagation. CoRR abs/2002.11505 (2020)
- [i40] Dan Alistarh, Martin Töpfer, Przemyslaw Uznanski: Robust Comparison in Population Protocols. CoRR abs/2003.06485 (2020)
- [i39] Dan Alistarh, Giorgi Nadiradze, Amirmojtaba Sabour: Dynamic Averaging Load Balancing on Cycles. CoRR abs/2003.09297 (2020)
- [i38] Dan Alistarh, Nikita Koval, Giorgi Nadiradze: Efficiency Guarantees for Parallel Incremental Algorithms under Relaxed Schedulers. CoRR abs/2003.09363 (2020)
- [i37] Sidak Pal Singh, Dan Alistarh: WoodFisher: Efficient second-order approximations for model compression. CoRR abs/2004.14340 (2020)
- [i36] Shigang Li, Tal Ben-Nun, Dan Alistarh, Salvatore Di Girolamo, Nikoli Dryden, Torsten Hoefler: Breaking (Global) Barriers in Parallel Stochastic Optimization with Wait-Avoiding Group Averaging. CoRR abs/2005.00124 (2020)
- [i35] Vyacheslav Kungurtsev, Bapi Chatterjee, Dan Alistarh: Stochastic Gradient Langevin with Delayed Gradients. CoRR abs/2006.07362 (2020)
- [i34] Alex Shamis, Matthew Renzelmann, Stanko Novakovic, Georgios Chatzopoulos, Anders T. Gjerdrum, Dan Alistarh, Aleksandar Dragojevic, Dushyanth Narayanan, Miguel Castro: Fast General Distributed Transactions with Opacity using Global Time. CoRR abs/2006.14346 (2020)
- [i33] Vitaly Aksenov, Dan Alistarh, Alexandra Drozdova, Amirkeivan Mohtashami: The Splay-List: A Distribution-Adaptive Concurrent Skip-List. CoRR abs/2008.01009 (2020)
- [i32] Dan Alistarh, Janne H. Korhonen: Improved Communication Lower Bounds for Distributed Optimisation. CoRR abs/2010.08222 (2020)
- [i31] Fartash Faghri, Iman Tabrizian, Ilia Markov, Dan Alistarh, Daniel M. Roy, Ali Ramezani-Kebrya: Adaptive Gradient Quantization for Data-Parallel SGD. CoRR abs/2010.12460 (2020)
- [i30] Zeyuan Allen-Zhu, Faeze Ebrahimian, Jerry Li, Dan Alistarh: Byzantine-Resilient Non-Convex Stochastic Gradient Descent. CoRR abs/2012.14368 (2020)
2010 – 2019
- 2019
- [j11] Dan Alistarh: Distributed Computing Column 76: Annual Review 2019. SIGACT News 50(4): 31-32 (2019)
- [c61] Nikita Koval, Dan Alistarh, Roman Elizarov: Scalable FIFO Channels for Programming via Communicating Sequential Processes. Euro-Par 2019: 317-333
- [c60] Chen Yu, Hanlin Tang, Cédric Renggli, Simon Kassing, Ankit Singla, Dan Alistarh, Ce Zhang, Ji Liu: Distributed Learning over Unreliable Networks. ICML 2019: 7202-7212
- [c59] Chris Wendler, Markus Püschel, Dan Alistarh: Powerset Convolutional Neural Networks. NeurIPS 2019: 927-938
- [c58] Dan Alistarh, Alexander Fedorov, Nikita Koval: In Search of the Fastest Concurrent Union-Find Algorithm. OPODIS 2019: 15:1-15:16
- [c57] Nikita Koval, Dan Alistarh, Roman Elizarov: Lock-free channels for programming via communicating sequential processes: poster. PPoPP 2019: 417-418
- [c56] Cédric Renggli, Saleh Ashkboos, Mehdi Aghagolzadeh, Dan Alistarh, Torsten Hoefler: SparCML: high-performance sparse communication for machine learning. SC 2019: 11:1-11:15
- [c55] Dan Alistarh, Giorgi Nadiradze, Nikita Koval: Efficiency Guarantees for Parallel Incremental Algorithms under Relaxed Schedulers. SPAA 2019: 145-154
- [c54] Dan Alistarh, James Aspnes, Faith Ellen, Rati Gelashvili, Leqi Zhu: Why extension-based proofs fail. STOC 2019: 986-996
- [i29] Alexander Ratner, Dan Alistarh, Gustavo Alonso, David G. Andersen, Peter Bailis, Sarah Bird, Nicholas Carlini, Bryan Catanzaro, Eric S. Chung, Bill Dally, Jeff Dean, Inderjit S. Dhillon, Alexandros G. Dimakis, Pradeep Dubey, Charles Elkan, Grigori Fursin, Gregory R. Ganger, Lise Getoor, Phillip B. Gibbons, Garth A. Gibson, Joseph E. Gonzalez, Justin Gottschlich, Song Han, Kim M. Hazelwood, Furong Huang, Martin Jaggi, Kevin G. Jamieson, Michael I. Jordan, Gauri Joshi, Rania Khalaf, Jason Knight, Jakub Konecný, Tim Kraska, Arun Kumar, Anastasios Kyrillidis, Jing Li, Samuel Madden, H. Brendan McMahan, Erik Meijer, Ioannis Mitliagkas, Rajat Monga, Derek Gordon Murray, Dimitris S. Papailiopoulos, Gennady Pekhimenko, Theodoros Rekatsinas, Afshin Rostamizadeh, Christopher Ré, Christopher De Sa, Hanie Sedghi, Siddhartha Sen, Virginia Smith, Alex Smola, Dawn Song, Evan Randall Sparks, Ion Stoica, Vivienne Sze, Madeleine Udell, Joaquin Vanschoren, Shivaram Venkataraman, Rashmi Vinayak, Markus Weimer, Andrew Gordon Wilson, Eric P. Xing, Matei Zaharia, Ce Zhang, Ameet Talwalkar: SysML: The New Frontier of Machine Learning Systems. CoRR abs/1904.03257 (2019)
- [i28] Vitaly Aksenov, Dan Alistarh, Petr Kuznetsov: Performance Prediction for Coarse-Grained Locking. CoRR abs/1904.11323 (2019)
- [i27] Shigang Li, Tal Ben-Nun, Salvatore Di Girolamo, Dan Alistarh, Torsten Hoefler: Taming Unbalanced Training Workloads in Deep Learning with Partial Collective Operations. CoRR abs/1908.04207 (2019)
- [i26] Chris Wendler, Dan Alistarh, Markus Püschel: Powerset Convolutional Neural Networks. CoRR abs/1909.02253 (2019)
- [i25] Giorgi Nadiradze, Amirmojtaba Sabour, Aditya Sharma, Ilia Markov, Vitaly Aksenov, Dan Alistarh: PopSGD: Decentralized Stochastic Gradient Descent in the Population Model. CoRR abs/1910.12308 (2019)
- [i24] Dan Alistarh, Alexander Fedorov, Nikita Koval: In Search of the Fastest Concurrent Union-Find Algorithm. CoRR abs/1911.06347 (2019)
- 2018
- [j10] Dan Alistarh, Justin Kopinsky, Petr Kuznetsov, Srivatsan Ravi, Nir Shavit: Inherent limitations of hybrid transactional memory. Distributed Comput. 31(3): 167-185 (2018)
- [j9] Dan Alistarh, James Aspnes, Valerie King, Jared Saia: Communication-efficient randomized consensus. Distributed Comput. 31(6): 489-501 (2018)
- [j8] Dan Alistarh, Rati Gelashvili: Recent Algorithmic Advances in Population Protocols. SIGACT News 49(3): 63-73 (2018)
- [j7] Dan Alistarh, William M. Leiserson, Alexander Matveev, Nir Shavit: ThreadScan: Automatic and Scalable Memory Reclamation. ACM Trans. Parallel Comput. 4(4): 18:1-18:18 (2018)
- [c53] Sarit Khirirat, Mikael Johansson, Dan Alistarh: Gradient compression for communication-limited convex optimization. CDC 2018: 166-171
- [c52] Demjan Grubic, Leo Tam, Dan Alistarh, Ce Zhang: Synchronous Multi-GPU Training for Deep Learning with Low-Precision Communications: An Empirical Study. EDBT 2018: 145-156
- [c51] Antonio Polino, Razvan Pascanu, Dan Alistarh: Model compression via distillation and quantization. ICLR (Poster) 2018
- [c50] Dan Alistarh, Zeyuan Allen-Zhu, Jerry Li: Byzantine Stochastic Gradient Descent. NeurIPS 2018: 4618-4628
- [c49] Dan Alistarh, Torsten Hoefler, Mikael Johansson, Nikola Konstantinov, Sarit Khirirat, Cédric Renggli: The Convergence of Sparsified Gradient Methods. NeurIPS 2018: 5977-5987
- [c48] Vitaly Aksenov, Dan Alistarh, Petr Kuznetsov: Brief Announcement: Performance Prediction for Coarse-Grained Locking. PODC 2018: 411-413
- [c47] Dan Alistarh: Session details: Session 1B: Shared Memory Theory. PODC 2018
- [c46] Dan Alistarh, Christopher De Sa, Nikola Konstantinov: The Convergence of Stochastic Gradient Descent in Asynchronous Shared Memory. PODC 2018: 169-178
- [c45] Dan Alistarh, Trevor Brown, Justin Kopinsky, Giorgi Nadiradze: Relaxed Schedulers Can Efficiently Parallelize Iterative Algorithms. PODC 2018: 377-386
- [c44] Dan Alistarh: A Brief Tutorial on Distributed and Concurrent Machine Learning. PODC 2018: 487-488
- [c43] Alen Stojanov, Tyler Michael Smith, Dan Alistarh, Markus Püschel: Fast Quantized Arithmetic on x86: Trading Compute for Data Movement. SiPS 2018: 349-354
- [c42] Dan Alistarh, James Aspnes, Rati Gelashvili: Space-Optimal Majority in Population Protocols. SODA 2018: 2221-2239
- [c41] Dan Alistarh, Trevor Brown, Justin Kopinsky, Jerry Zheng Li, Giorgi Nadiradze: Distributionally Linearizable Data Structures. SPAA 2018: 133-142
- [c40] Dan Alistarh, Syed Kamran Haider, Raphael Kübler, Giorgi Nadiradze: The Transactional Conflict Problem. SPAA 2018: 383-392
- [e1] Dan Alistarh, Alex Delis, George Pallis: Algorithmic Aspects of Cloud Computing - Third International Workshop, ALGOCLOUD 2017, Vienna, Austria, September 5, 2017, Revised Selected Papers. Lecture Notes in Computer Science 10739, Springer 2018, ISBN 978-3-319-74874-0 [contents]
- [i23] David Dao, Dan Alistarh, Claudiu Musat, Ce Zhang: DataBright: Towards a Global Exchange for Decentralized Data Ownership and Trusted Computation. CoRR abs/1802.04780 (2018)
- [i22] Nezihe Merve Gürel, Kaan Kara, Dan Alistarh, Ce Zhang: Compressive Sensing with Low Precision Data Representation: Radio Astronomy and Beyond. CoRR abs/1802.04907 (2018)
- [i21] Antonio Polino, Razvan Pascanu, Dan Alistarh: Model compression via distillation and quantization. CoRR abs/1802.05668 (2018)
- [i20] Cédric Renggli, Dan Alistarh, Torsten Hoefler: SparCML: High-Performance Sparse Communication for Machine Learning. CoRR abs/1802.08021 (2018)
- [i19] Dan Alistarh, Christopher De Sa, Nikola Konstantinov: The Convergence of Stochastic Gradient Descent in Asynchronous Shared Memory. CoRR abs/1803.08841 (2018)
- [i18] Dan Alistarh, Zeyuan Allen-Zhu, Jerry Li: Byzantine Stochastic Gradient Descent. CoRR abs/1803.08917 (2018)
- [i17] Dan Alistarh, Syed Kamran Haider, Raphael Kübler, Giorgi Nadiradze: The Transactional Conflict Problem. CoRR abs/1804.00947 (2018)
- [i16] Dan Alistarh, Trevor Brown, Justin Kopinsky, Jerry Zheng Li, Giorgi Nadiradze: Distributionally Linearizable Data Structures. CoRR abs/1804.01018 (2018)
- [i15] Dan Alistarh, Trevor Brown, Justin Kopinsky, Giorgi Nadiradze: Relaxed Schedulers Can Efficiently Parallelize Iterative Algorithms. CoRR abs/1808.04155 (2018)
- [i14] Dan Alistarh, Torsten Hoefler, Mikael Johansson, Sarit Khirirat, Nikola Konstantinov, Cédric Renggli: The Convergence of Sparsified Gradient Methods. CoRR abs/1809.10505 (2018)
- [i13] Hanlin Tang, Chen Yu, Cédric Renggli, Simon Kassing, Ankit Singla, Dan Alistarh, Ji Liu, Ce Zhang: Distributed Learning over Unreliable Networks. CoRR abs/1810.07766 (2018)
- [i12] Dan Alistarh, James Aspnes, Faith Ellen, Rati Gelashvili, Leqi Zhu: Why Extension-Based Proofs Fail. CoRR abs/1811.01421 (2018)
- 2017
- [j6] Syed Kamran Haider, William Hasenplaugh, Dan Alistarh: Lease/Release: Architectural Support for Scaling Contended Data Structures. ACM Trans. Parallel Comput. 4(2): 8:1-8:25 (2017)
- [c39] Ghufran Baig, Dan Alistarh, Thomas Karagiannis, Bozidar Radunovic, Matthew Balkwill, Lili Qiu: Towards unlicensed cellular networks in TV white spaces. CoNEXT 2017: 2-14
- [c38] Dan Alistarh, Bartlomiej Dudek, Adrian Kosowski, David Soloveichik, Przemyslaw Uznanski: Robust Detection in Leak-Prone Population Protocols. DNA 2017: 155-171
- [c37] Dan Alistarh, William M. Leiserson, Alexander Matveev, Nir Shavit: Forkscan: Conservative Memory Reclamation for Modern Operating Systems. EuroSys 2017: 483-498
- [c36] Kaan Kara, Dan Alistarh, Gustavo Alonso, Onur Mutlu, Ce Zhang: FPGA-Accelerated Dense Linear Machine Learning: A Precision-Convergence Trade-Off. FCCM 2017: 160-167
- [c35] Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, Ce Zhang: ZipML: Training Linear Models with End-to-End Low Precision, and a Little Bit of Deep Learning. ICML 2017: 4035-4043
- [c34] Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, Milan Vojnovic: QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding. NIPS 2017: 1709-1720
- [c33] Dan Alistarh, Justin Kopinsky, Jerry Li, Giorgi Nadiradze: The Power of Choice in Priority Scheduling. PODC 2017: 283-292
- [c32] Dan Alistarh, James Aspnes, David Eisenstat, Rati Gelashvili, Ronald L. Rivest: Time-Space Trade-offs in Population Protocols. SODA 2017: 2560-2579
- [i11] Dan Alistarh, James Aspnes, Rati Gelashvili: Space-Optimal Majority in Population Protocols. CoRR abs/1704.04947 (2017)
- [i10] Dan Alistarh, Justin Kopinsky, Jerry Li, Giorgi Nadiradze: The Power of Choice in Priority Scheduling. CoRR abs/1706.04178 (2017)
- [i9] Dan Alistarh, Bartlomiej Dudek, Adrian Kosowski, David Soloveichik, Przemyslaw Uznanski: Robust Detection in Leak-Prone Population Protocols. CoRR abs/1706.09937 (2017)
- 2016
- [j5] Dan Alistarh, Keren Censor-Hillel, Nir Shavit: Are Lock-Free Concurrent Algorithms Practically Wait-Free? J. ACM 63(4): 31:1-31:20 (2016)
- [c31] Syed Kamran Haider, William Hasenplaugh, Dan Alistarh: Lease/release: architectural support for scaling contended data structures. PPoPP 2016: 17:1-17:12
- [i8] Dan Alistarh, James Aspnes, David Eisenstat, Rati Gelashvili, Ronald L. Rivest: Time-Space Trade-offs in Population Protocols. CoRR abs/1602.08032 (2016)
- [i7] Dan Alistarh, Jerry Li, Ryota Tomioka, Milan Vojnovic: QSGD: Randomized Quantization for Communication-Optimal Stochastic Gradient Descent. CoRR abs/1610.02132 (2016)
- [i6] Hantian Zhang, Kaan Kara, Jerry Li, Dan Alistarh, Ji Liu, Ce Zhang: ZipML: An End-to-end Bitwise Framework for Dense Generalized Linear Models. CoRR abs/1611.05402 (2016)
- 2015
- [j4] Dan Alistarh: The Renaming Problem: Recent Developments and Open Questions. Bull. EATCS 117 (2015)
- [c30] Dan Alistarh, Rati Gelashvili: Polylogarithmic-Time Leader Election in Population Protocols. ICALP (2) 2015: 479-491
- [c29] Dan Alistarh, Jennifer Iglesias, Milan Vojnovic: Streaming Min-max Hypergraph Partitioning. NIPS 2015: 1900-1908
- [c28] Dan Alistarh, Rati Gelashvili, Milan Vojnovic: Fast and Exact Majority in Population Protocols. PODC 2015: 47-56
- [c27] Dan Alistarh, Thomas Sauerwald, Milan Vojnovic: Lock-Free Algorithms under Stochastic Schedulers. PODC 2015: 251-260
- [c26] Dan Alistarh, Rati Gelashvili, Adrian Vladu: How To Elect a Leader Faster than a Tournament. PODC 2015: 365-374
- [c25] Dan Alistarh, Justin Kopinsky, Jerry Li, Nir Shavit: The SprayList: a scalable relaxed priority queue. PPoPP 2015: 11-20
- [c24] Dan Alistarh, Hitesh Ballani, Paolo Costa, Adam C. Funnell, Joshua Benjamin, Philip M. Watts, Benn Thomsen: A High-Radix, Low-Latency Optical Switch for Data Centers. SIGCOMM 2015: 367-368
- [c23] Dan Alistarh, William M. Leiserson, Alexander Matveev, Nir Shavit: ThreadScan: Automatic and Scalable Memory Reclamation. SPAA 2015: 123-132
- [c22] Dan Alistarh, Justin Kopinsky, Petr Kuznetsov, Srivatsan Ravi, Nir Shavit: Inherent Limitations of Hybrid Transactional Memory. DISC 2015: 185-199
- [i5] Dan Alistarh, Rati Gelashvili: Polylogarithmic-Time Leader Election in Population Protocols Using Polylogarithmic States. CoRR abs/1502.05745 (2015)
- 2014
- [j3] Dan Alistarh, James Aspnes, Keren Censor-Hillel, Seth Gilbert, Rachid Guerraoui: Tight Bounds for Asynchronous Renaming. J. ACM 61(3): 18:1-18:51 (2014)
- [c21] Dan Alistarh, Patrick Eugster, Maurice Herlihy, Alexander Matveev, Nir Shavit: StackTrack: an automated transactional approach to concurrent memory reclamation. EuroSys 2014: 25:1-25:14
- [c20] Dan Alistarh, Justin Kopinsky, Alexander Matveev, Nir Shavit: The LevelArray: A Fast, Practical Long-Lived Renaming Algorithm. ICDCS 2014: 348-357
- [c19] Dan Alistarh, Keren Censor-Hillel, Nir Shavit: Brief announcement: are lock-free concurrent algorithms practically wait-free? PODC 2014: 50-52
- [c18] Dan Alistarh, Oksana Denysyuk, Luís E. T. Rodrigues, Nir Shavit: Balls-into-leaves: sub-logarithmic renaming in synchronous message-passing systems. PODC 2014: 232-241
- [c17] Dan Alistarh, James Aspnes, Michael A. Bender, Rati Gelashvili, Seth Gilbert: Dynamic Task Allocation in Asynchronous Shared Memory. SODA 2014: 416-435
- [c16] Dan Alistarh, Keren Censor-Hillel, Nir Shavit: Are lock-free concurrent algorithms practically wait-free? STOC 2014: 714-723
- [c15] Dan Alistarh, James Aspnes, Valerie King, Jared Saia: Communication-Efficient Randomized Consensus. DISC 2014: 61-75
- [p1] Dan Alistarh, Rachid Guerraoui: Distributed Algorithms. Computing Handbook, 3rd ed. (1) 2014: 16: 1-16
- [i4] Dan Alistarh, Justin Kopinsky, Alexander Matveev, Nir Shavit: The LevelArray: A Fast, Practical Long-Lived Renaming Algorithm. CoRR abs/1405.5461 (2014)
- [i3] Dan Alistarh, Justin Kopinsky, Petr Kuznetsov, Srivatsan Ravi, Nir Shavit: Inherent Limitations of Hybrid Transactional Memory. CoRR abs/1405.5689 (2014)
- [i2] Dan Alistarh, Rati Gelashvili, Adrian Vladu: How to Elect a Leader Faster than a Tournament. CoRR abs/1411.1001 (2014)
- 2013
- [c14] Dan Alistarh, James Aspnes, George Giakkoupis, Philipp Woelfel: Randomized loose renaming in O(log log n) time. PODC 2013: 200-209
- [i1] Dan Alistarh, Keren Censor-Hillel, Nir Shavit: Are Lock-Free Concurrent Algorithms Practically Wait-Free? CoRR abs/1311.3200 (2013)
- 2012
- [b1] Dan Alistarh: Randomized versus Deterministic Implementations of Concurrent Data Structures. EPFL, Switzerland, 2012
- [j2] Dan Alistarh, Seth Gilbert, Rachid Guerraoui, Corentin Travers: Of Choices, Failures and Asynchrony: The Many Faces of Set Agreement. Algorithmica 62(1-2): 595-629 (2012)
- [j1] Dan Alistarh, Seth Gilbert, Rachid Guerraoui, Corentin Travers: Generating Fast Indulgent Algorithms. Theory Comput. Syst. 51(4): 404-424 (2012)
- [c13] Dan Alistarh, Michael A. Bender, Seth Gilbert, Rachid Guerraoui: How to Allocate Tasks Asynchronously. FOCS 2012: 331-340
- [c12] Dan Alistarh, Hagit Attiya, Rachid Guerraoui, Corentin Travers: Early Deciding Synchronous Renaming in O(log f) Rounds or Less. SIROCCO 2012: 195-206
- [c11] Dan Alistarh, Rachid Guerraoui, Petr Kuznetsov, Giuliano Losa: On the cost of composing shared-memory algorithms. SPAA 2012: 298-307
- 2011
- [c10] Dan Alistarh, James Aspnes, Seth Gilbert, Rachid Guerraoui: The Complexity of Renaming. FOCS 2011: 718-727
- [c9] Dan Alistarh, Seth Gilbert, Rachid Guerraoui, Corentin Travers: Generating Fast Indulgent Algorithms. ICDCN 2011: 41-52
- [c8] Dan Alistarh, James Aspnes, Keren Censor-Hillel, Seth Gilbert, Morteza Zadimoghaddam: Optimal-time adaptive strong renaming, with applications to counting. PODC 2011: 239-248
- [c7] Dan Alistarh, James Aspnes: Sub-logarithmic Test-and-Set against a Weak Adversary. DISC 2011: 97-109
- 2010
- [c6] Dan Alistarh, Seth Gilbert, Rachid Guerraoui, Morteza Zadimoghaddam: How Efficient Can Gossip Be? (On the Cost of Resilient Information Exchange). ICALP (2) 2010: 115-126
- [c5] Dan Alistarh, Seth Gilbert, Rachid Guerraoui, Zarko Milosevic, Calvin C. Newport: Securing every bit: authenticated broadcast in radio networks. SPAA 2010: 50-59
- [c4] Dan Alistarh, Hagit Attiya, Seth Gilbert, Andrei Giurgiu, Rachid Guerraoui: Fast Randomized Test-and-Set and Renaming. DISC 2010: 94-108
- [c3] Dan Alistarh, Seth Gilbert, Rachid Guerraoui, Corentin Travers: Brief Announcement: New Bounds for Partially Synchronous Set Agreement. DISC 2010: 404-405
2000 – 2009
- 2009
- [c2] Dan Alistarh, Seth Gilbert, Rachid Guerraoui, Corentin Travers: Of Choices, Failures and Asynchrony: The Many Faces of Set Agreement. ISAAC 2009: 943-953
- 2008
- [c1] Dan Alistarh, Seth Gilbert, Rachid Guerraoui, Corentin Travers: How to Solve Consensus in the Smallest Window of Synchrony. DISC 2008: 32-46