Here are 48 public repositories matching the data-parallelism topic.
- Making large AI models cheaper, faster and more accessible (Python, updated May 30, 2024)
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. (Python, updated May 30, 2024)
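DeepSpeed's ZeRO optimizer saves memory by partitioning optimizer state across data-parallel workers instead of replicating it. The sketch below illustrates the ZeRO stage-1 idea only; the function names, the SGD-with-momentum update, and the two-worker setup are ours for illustration, not DeepSpeed's API.

```python
# Illustrative sketch of ZeRO stage-1 style optimizer-state sharding (not the
# DeepSpeed API): every worker sees the full gradient, but each worker keeps
# the momentum buffer for only its shard of the parameters, updates that
# shard, and the shards are then concatenated (the "all-gather").

def sgd_momentum_step(params, grads, momentum, lr=0.1, mu=0.9):
    """Plain SGD-with-momentum over flat lists of parameters."""
    new_params, new_momentum = [], []
    for p, g, m in zip(params, grads, momentum):
        m = mu * m + g
        new_params.append(p - lr * m)
        new_momentum.append(m)
    return new_params, new_momentum

def zero1_step(params, grads, momentum_shards, world_size, lr=0.1, mu=0.9):
    """Each of `world_size` workers updates only its contiguous shard,
    holding momentum only for that shard; results are gathered back."""
    shard = len(params) // world_size
    gathered, new_shards = [], []
    for rank in range(world_size):
        lo, hi = rank * shard, (rank + 1) * shard
        p, m = sgd_momentum_step(params[lo:hi], grads[lo:hi],
                                 momentum_shards[rank], lr, mu)
        gathered.extend(p)
        new_shards.append(m)
    return gathered, new_shards

params = [1.0, 2.0, 3.0, 4.0]
grads = [0.5, -0.5, 1.0, -1.0]
momentum_shards = [[0.0, 0.0], [0.0, 0.0]]  # each worker owns half the state

sharded, _ = zero1_step(params, grads, momentum_shards, world_size=2)
baseline, _ = sgd_momentum_step(params, grads, [0.0] * 4)
assert sharded == baseline  # sharding changes memory use, not the math
```

The point of the sketch: each worker's momentum buffer is 1/world_size of the full one, yet the gathered result is bit-identical to the unsharded update.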
- Distributed Deep Learning, with a focus on distributed training, using Keras and Apache Spark (Python, updated Jul 25, 2018)
- A state-of-the-art multithreading runtime: message-passing based, fast, scalable, ultra-low overhead
- A large-model development suite providing end-to-end toolchains for large language models, cross-modal large models, bio-computing large models, and related domains (Python, updated May 24, 2024)
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training (Python, updated May 12, 2024)
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. (Python, updated Mar 31, 2023)
- Distributed Keras Engine: make Keras faster with only one line of code (Python, updated Oct 3, 2019)
- Ternary Gradients to Reduce Communication in Distributed Deep Learning (TensorFlow) (Python, updated Nov 19, 2018)
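TernGrad compresses each gradient component to one of three levels {-s, 0, +s}, where s is the tensor's maximum absolute value, zeroing each component stochastically so the quantizer stays unbiased. A minimal sketch of that quantizer (function and variable names are ours, not the repo's):

```python
import random

def ternarize(grad, rng=random):
    """TernGrad-style stochastic ternary quantization: each component
    becomes s * sign(g) with probability |g| / s, else 0, where
    s = max|g|.  E[ternarize(g)] == g, so the quantizer is unbiased."""
    s = max(abs(g) for g in grad)
    if s == 0.0:
        return [0.0] * len(grad)
    out = []
    for g in grad:
        keep = rng.random() < abs(g) / s
        out.append(s * (1 if g > 0 else -1) if keep else 0.0)
    return out

rng = random.Random(0)
g = [0.3, -0.6, 0.0, 0.6]
q = ternarize(g, rng)
# every quantized component is one of {-0.6, 0.0, +0.6}
assert all(v in (-0.6, 0.0, 0.6) for v in q)
```

Since only the sign pattern and a single per-tensor scalar need to be communicated, this cuts gradient traffic to roughly 2 bits per component, which is the communication saving the repo's title refers to.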
- Orkhon: ML Inference Framework and Server Runtime
- Large-scale 4D parallelism pre-training for Hugging Face transformers in Mixture of Experts (still a work in progress) (Python, updated Dec 14, 2023)
- Multi-GPU training for Keras (Python, updated Jun 23, 2017)
- Distributed training (multi-node) of a Transformer model (Python, updated Apr 10, 2024)
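Whatever the framework, the core of data-parallel distributed training is the same: each node computes gradients on its own shard of the batch, and the nodes then average them (an all-reduce), which for a loss defined as a mean over examples reproduces the full-batch gradient exactly. A framework-free sketch (the 1-D linear model and data here are made up for illustration):

```python
def grad_mse(w, xs, ys):
    """Gradient of mean squared error for the 1-D linear model y = w * x."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = 0.5

# Full-batch gradient computed on a single worker.
full = grad_mse(w, xs, ys)

# Data parallelism: two workers, each with half the batch,
# then average the per-worker gradients (the "all-reduce" step).
g0 = grad_mse(w, xs[:2], ys[:2])
g1 = grad_mse(w, xs[2:], ys[2:])
averaged = (g0 + g1) / 2

assert abs(full - averaged) < 1e-12  # identical to the full-batch gradient
```

Note the equivalence holds exactly only when shards are equal-sized; with unequal shards the average must be weighted by shard size.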
- SC23 Deep Learning at Scale Tutorial Material (Python, updated Nov 13, 2023)
- WIP: Veloce is a low-code Ray-based parallelization library that makes machine learning computation novel, efficient, and heterogeneous (Python, updated Aug 4, 2022)
- Implementation of parallel matrix multiplication methods using the Fox algorithm on Peking University's high-performance computing system
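Fox's algorithm multiplies two blocked matrices on a q x q process grid: in stage k, grid row i broadcasts block A[i][(i+k) mod q] along the row, each process multiplies it into its current B block and accumulates, and the B blocks then shift up one grid row. A serial simulation of that block schedule (pure Python, no MPI; real implementations overlap the broadcast and shift with computation):

```python
def fox_multiply(A, B, q):
    """Serial simulation of Fox's algorithm on a q x q 'process grid'.
    A and B are q x q grids of equally sized square blocks; returns the
    product in the same blocked layout."""
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]

    def matadd(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    b = len(A[0][0])  # block size
    C = [[[[0.0] * b for _ in range(b)] for _ in range(q)] for _ in range(q)]
    Bcur = [row[:] for row in B]  # B blocks circulate between stages
    for k in range(q):
        for i in range(q):
            Ab = A[i][(i + k) % q]          # block broadcast along grid row i
            for j in range(q):
                C[i][j] = matadd(C[i][j], matmul(Ab, Bcur[i][j]))
        Bcur = Bcur[1:] + Bcur[:1]          # circular shift of B rows, upward
    return C

# 2 x 2 matrices as a 2 x 2 grid of 1 x 1 blocks:
A_blocks = [[[[1.0]], [[2.0]]], [[[3.0]], [[4.0]]]]
B_blocks = [[[[5.0]], [[6.0]]], [[[7.0]], [[8.0]]]]
C = fox_multiply(A_blocks, B_blocks, 2)
assert [[C[i][j][0][0] for j in range(2)] for i in range(2)] == \
    [[19.0, 22.0], [43.0, 50.0]]
```

After q stages, process (i, j) has accumulated the sum over m of A[i][m] * B[m][j], i.e. exactly its block of the product, while only ever holding one A block and one B block at a time.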
- Hands-on labs on PyTorch-based distributed training and SageMaker distributed training, written so beginners can get started easily, guiding you through step-by-step code modifications based on basic BERT use cases (Jupyter Notebook, updated Jul 18, 2023)
- Understanding the effects of data parallelism and sparsity on neural network training (Python, updated Jul 27, 2021)
- OpenCL Data Parallel Primitives
- OpenCL-powered Merklization using BLAKE3
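Merklization hashes every leaf independently and then combines pairs of digests upward into a single root, so any leaf change alters the root; the independence of the per-level hashes is what makes the construction data-parallel (and GPU-friendly). A sketch using `hashlib.blake2b` from the standard library as a stand-in for BLAKE3, which is not in the stdlib; the duplicate-last-node padding rule is a common simplification, not BLAKE3's actual tree mode:

```python
import hashlib

def h(data: bytes) -> bytes:
    """32-byte digest; blake2b stands in for BLAKE3 here."""
    return hashlib.blake2b(data, digest_size=32).digest()

def merkle_root(leaves):
    """Merkle root over a non-empty list of byte strings.  An odd level
    duplicates its last node (a simplification; real schemes differ)."""
    level = [h(leaf) for leaf in leaves]       # leaf hashes: all independent
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        # pair-combining within a level is also embarrassingly parallel
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

r1 = merkle_root([b"a", b"b", b"c", b"d"])
r2 = merkle_root([b"a", b"b", b"x", b"d"])
assert r1 != r2          # changing one leaf changes the root
assert len(r1) == 32
```

On a GPU, each work-item would hash one leaf (or one pair per level), which is presumably what the OpenCL kernel in the repo above does.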