Tensor parallel docs #38178
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
thanks a lot for working on this!
```python
supported_parallel_strategies = {
    "colwise": ColwiseParallel(),
    "rowwise": RowwiseParallel(),
    "colwise_rep": ColwiseParallel(output_layouts=Replicate()),
    "rowwise_rep": RowwiseParallel(input_layouts=Replicate()),
    "local_colwise": ColwiseParallel(use_dtensor=False),
    "local_rowwise": RowwiseParallel(use_dtensor=False),
    "local": IsolatedParallel(),
    "gather": GatherParallel(),
    "local_packed_rowwise": PackedRowwiseParallel(use_dtensor=False),
    "sequence_parallel": SequenceParallel(),
    "replicate": ReplicateParallel(),
}
```
should probably be updated with the interface!
Set `tp_plan="auto"` in [`~AutoModel.from_pretrained`] to enable tensor parallelism for inference.

## Tensor parallelism in-depth

Our implementation of tensor parallelism is built on top of the `torch.distributed` package. We heavily utilize abstractions such as `DeviceMesh` and `DTensor` to provide a simple and extensible interface to the user.
We should say that the interface is agnostic, but the ones we implemented rely on `torch.distributed`.
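For reference, the `tp_plan="auto"` entry point from the hunk above is used roughly like this — a minimal sketch, assuming the example checkpoint below and a launch via `torchrun --nproc-per-node 4 script.py`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example checkpoint; tp_plan="auto" applies the model's predefined tensor
# parallel plan, and each process reads its rank from the torchrun launcher
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, tp_plan="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Can I help", return_tensors="pt").input_ids.to(model.device)
outputs = model(inputs)
```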
Imagine `DeviceMesh` as a multi-dimensional grid of devices that communicate together. Different parallelization strategies require different types of communication patterns, so we can create a `DeviceMesh` with multiple submeshes:

```python
from torch.distributed.device_mesh import init_device_mesh

# Create a 2D mesh of 4 GPUs
device_mesh = init_device_mesh("cuda", (2, 2), mesh_dim_names=["dp", "tp"])

# Create a 1D mesh of 4 GPUs
device_mesh = init_device_mesh("cuda", (4,), mesh_dim_names=["tp"])
```

Then, most of the parallelization strategies defined by `torch.distributed` can be applied to a submesh of the `DeviceMesh`, and they will automatically handle the communication patterns.

```python
tp_submesh = device_mesh["tp"]
```
we don't use submeshes, not sure I would mention them!
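To make the hunk above concrete, here is a minimal sketch of applying `torch.distributed` parallelization strategies to a mesh; the toy MLP is hypothetical, and the script assumes 4 GPUs under `torchrun`:

```python
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import ColwiseParallel, RowwiseParallel, parallelize_module

# Toy module, used purely for illustration
class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.up_proj = nn.Linear(128, 256)
        self.down_proj = nn.Linear(256, 128)

    def forward(self, x):
        return self.down_proj(self.up_proj(x).relu())

device_mesh = init_device_mesh("cuda", (4,), mesh_dim_names=["tp"])
# Column-wise followed by row-wise, so the pair of matmuls only needs a
# single all-reduce on the output
model = parallelize_module(
    MLP().cuda(),
    device_mesh,
    {"up_proj": ColwiseParallel(), "down_proj": RowwiseParallel()},
)
```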
## Using 🤗 transformers

Transformers provides a simple interface to use for tensor parallelism. We provide multiple classes implementing different partitioning strategies and a simple entrypoint to parallelize an `nn.Module` instance. You won't have to interact with this interface directly; everything is done in the `PretrainedModel.from_pretrained` method for you. This section will first talk about the partitioning strategies we support, then the user interface you will be interacting with, and finally it will teach you how to extend it with your own partitioning strategies.

### Partitioning strategies

1) `ColwiseParallel` - A simple column-wise partitioning that can handle both weights and biases; it does exactly what we've discussed before.
2) `RowwiseParallel` - Row-wise partitioning as discussed before; it supports weights and biases, and on top of that it also supports `nn.Embedding` modules.
3) `SequenceParallel` - Sequence parallel implementation to support `LayerNorm` and `Dropout` layers. Also supports a Python implementation of `RMSNorm` (see [this](https://github.com/facebookresearch/llama/blob/main/llama/model.py#L34)).
4) `PackedColwiseParallel` - A variant of column-wise partitioning that works on packed weights (i.e. `up_proj` and `gate_proj` packed together). For more details, see [this comment](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/tensor_parallel.py#L79-#L108).
5) `PackedRowwiseParallel` - A variant of row-wise partitioning that works on packed weights; for more details, check the comment linked above.
6) `GatherParallel` - A very simple class that only gathers the outputs of the module across devices.
7) `IsolatedParallel` - A special case where we want to *isolate* the module from the rest of the devices (world). This is used for experts in MoE layers, basically creating expert parallelism of sorts.
8) `ReplicateParallel` - Many `torch.distributed` APIs break if the model is partially sharded, so this class is used to replicate the module across all devices.

You can use any of these with their corresponding key as follows:

```python
supported_parallel_strategies = {
    "colwise": ColwiseParallel(),
    "rowwise": RowwiseParallel(),
    "colwise_rep": ColwiseParallel(output_layouts=Replicate()),
    "rowwise_rep": RowwiseParallel(input_layouts=Replicate()),
    "local_colwise": ColwiseParallel(use_dtensor=False),
    "local_rowwise": RowwiseParallel(use_dtensor=False),
    "local": IsolatedParallel(),
    "gather": GatherParallel(),
    "local_packed_rowwise": PackedRowwiseParallel(use_dtensor=False),
    "sequence_parallel": SequenceParallel(),
    "replicate": ReplicateParallel(),
}
```
Would start with that
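Concretely, these keys are the values you put in a model's tensor parallel plan, which maps module-name patterns to strategies. A hypothetical plan for a llama-style decoder (the `*` wildcard standing for the layer index follows the convention of the predefined plans):

```python
# Hypothetical plan; module names depend on the actual architecture
tp_plan = {
    "model.layers.*.self_attn.q_proj": "colwise",
    "model.layers.*.self_attn.k_proj": "colwise",
    "model.layers.*.self_attn.v_proj": "colwise",
    "model.layers.*.self_attn.o_proj": "rowwise",
    "model.layers.*.mlp.gate_proj": "colwise",
    "model.layers.*.mlp.up_proj": "colwise",
    "model.layers.*.mlp.down_proj": "rowwise",
}
```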
This is because `DTensor` is not supported for some operations, such as `torch.chunk`. Therefore, we sometimes need to use the `local*` strategies, which use vanilla `torch.Tensor` and handle some of the distributed logic manually.

> [!TIP]
> If you are using a custom partitioning strategy and it's not working with a `... is not supported` error, try using the `local*` strategies to see if they work better.
find a proper error! (reproduce the issue)
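As a sketch of the workaround the tip describes (the module pattern here is hypothetical): if a `rowwise` entry fails on an unsupported `DTensor` operation, request the same partitioning without `DTensor`:

```python
tp_plan = {
    # was "rowwise"; local_rowwise keeps the same sharding but uses plain
    # torch.Tensor shards instead of DTensor
    "model.layers.*.mlp.down_proj": "local_rowwise",
}
```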
Can you just reorder to go from transformers first and then the more technical stuff, please?
### DeviceMesh

Imagine `DeviceMesh` as a multi-dimensional grid of devices that communicate together. Different parallelization strategies require different types of communication patterns, so we can create a `DeviceMesh` with multiple submeshes:

```python
from torch.distributed.device_mesh import init_device_mesh

# Create a 2D mesh of 4 GPUs
device_mesh = init_device_mesh("cuda", (2, 2), mesh_dim_names=["dp", "tp"])

# Create a 1D mesh of 4 GPUs
device_mesh = init_device_mesh("cuda", (4,), mesh_dim_names=["tp"])
```
again, we don't use a dp device mesh, so I would not write this
## Using 🤗 transformers

Transformers provides a simple interface to use for tensor parallelism. We provide multiple classes implementing different partitioning strategies and a simple entrypoint to parallelize an `nn.Module` instance. You won't have to interact with this interface directly; everything is done in the `PretrainedModel.from_pretrained` method for you. This section will first talk about the partitioning strategies we support, then the user interface you will be interacting with, and finally it will teach you how to extend it with your own partitioning strategies.

### Partitioning strategies

1) `ColwiseParallel` - A simple column-wise partitioning that can handle both weights and biases; it does exactly what we've discussed before.
2) `RowwiseParallel` - Row-wise partitioning as discussed before; it supports weights and biases, and on top of that it also supports `nn.Embedding` modules.
3) `SequenceParallel` - Sequence parallel implementation to support `LayerNorm` and `Dropout` layers. Also supports a Python implementation of `RMSNorm` (see [this](https://github.com/facebookresearch/llama/blob/main/llama/model.py#L34)).
4) `PackedColwiseParallel` - A variant of column-wise partitioning that works on packed weights (i.e. `up_proj` and `gate_proj` packed together). For more details, see [this comment](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/tensor_parallel.py#L79-#L108).
5) `PackedRowwiseParallel` - A variant of row-wise partitioning that works on packed weights; for more details, check the comment linked above.
6) `GatherParallel` - A very simple class that only gathers the outputs of the module across devices.
7) `IsolatedParallel` - A special case where we want to *isolate* the module from the rest of the devices (world). This is used for experts in MoE layers, basically creating expert parallelism of sorts.
8) `ReplicateParallel` - Many `torch.distributed` APIs break if the model is partially sharded, so this class is used to replicate the module across all devices.

You can use any of these with their corresponding key as follows:

```python
class ParallelInterface(MutableMapping):
    """
    Dict-like object keeping track of allowed tensor parallel strategies. You can easily add a new strategy
    with a call to `register()`. If a model needs to locally overwrite an existing strategy, say `colwise`,
    it needs to declare a new instance of this class inside the `modeling_<model>.py`, and declare it on that instance.
    """
```
I would probably start with explaining this, top to bottom 😉
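Going by the docstring above (which reads like it was copied from the attention interface), `ParallelInterface` is a dict-like registry. A sketch of how registration and lookup would presumably work; the import path, the `register()` signature, and the pre-populated built-ins are all assumptions here, not confirmed API:

```python
from transformers.integrations.tensor_parallel import ColwiseParallel, ParallelInterface

# Assumed usage, mirroring the docstring: register a strategy instance under
# a new key, then read it back dict-style from an interface instance
ParallelInterface.register("my_colwise", ColwiseParallel())
interface = ParallelInterface()
strategy = interface["my_colwise"]
```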
### Sharding a model

We provide two ways to shard a model. The first one is to use the `auto` tensor parallelism plan, which automatically shards the model based on our predefined configuration. This requires the model to have a predefined tensor parallel plan in transformers.

```python
from transformers import AutoModelForCausalLM
```
should be the 2nd part, after using tp in transformers
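The second way mentioned in the hunk above, passing a plan explicitly, would look roughly like this — a sketch assuming `from_pretrained` also accepts a dict-valued `tp_plan`; the pattern keys follow the convention shown earlier:

```python
from transformers import AutoModelForCausalLM

# Hypothetical manual plan: shard only the MLP projections, leave the rest
# replicated; assumes dict-valued tp_plan is accepted by from_pretrained
tp_plan = {
    "model.layers.*.mlp.up_proj": "colwise",
    "model.layers.*.mlp.down_proj": "rowwise",
}
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", tp_plan=tp_plan)
```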
```python
return outputs.redistribute(placements=output_layouts, device_mesh=device_mesh)
```

3) Register the strategy
nice!
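For completeness, here is a self-contained sketch of what a custom strategy built around that `redistribute` call can look like, written against torch's `ParallelStyle` base class rather than the transformers-internal one; all names are hypothetical and a recent PyTorch is assumed:

```python
from functools import partial

import torch.nn as nn
from torch.distributed.tensor import DTensor, Replicate
from torch.distributed.tensor.parallel import ParallelStyle


class ReplicateOutput(ParallelStyle):
    """Hypothetical strategy: shard nothing, but redistribute the module's
    output into a DTensor replicated across the whole mesh."""

    @staticmethod
    def _redistribute_output(mod, inputs, outputs, device_mesh):
        if isinstance(outputs, DTensor):
            return outputs.redistribute(placements=[Replicate()], device_mesh=device_mesh)
        return outputs

    def _apply(self, module: nn.Module, device_mesh) -> nn.Module:
        # Forward hook rewrites the output after every call to the module
        module.register_forward_hook(partial(self._redistribute_output, device_mesh=device_mesh))
        return module
```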
* Feat: initial docs
* Feat: update doc
* Final typos/changes
* Refactor: reorder top to bottom.
Improve the existing docs for tensor parallelism, with more details etc. Relies on #37877 (needs to be edited a tiny bit when the final version of that PR lands).
cc @ArthurZucker