Alpa
0.2.3.dev2

Getting Started

  • Install Alpa
  • Alpa Quickstart

Tutorials

  • Distributed Training with Both Shard and Pipeline Parallelism
  • Differences between alpa.parallelize, jax.pmap and jax.pjit
  • Serving OPT-175B, BLOOM-176B and CodeGen-16B using Alpa
  • Performance Tuning Guide
  • ICML’22 Big Model Tutorial
  • Using Alpa on Slurm
  • Frequently Asked Questions (FAQ)

Architecture

  • Design and Architecture
  • Alpa Compiler Walk-Through
  • Code Structure of the Intra-op Solver

Benchmark

  • Performance Benchmark

Publications

  • Publications

Developer Guide

  • Developer Guide

Alpa Documentation

Alpa is a system for training and serving large-scale neural networks.


© Copyright 2022, Alpa Developers.
