Add links and quickstart · zyuh/segmentation_models.pytorch@25e3851

Commit 25e3851
Add links and quickstart
1 parent acfd9dd commit 25e3851

File tree: README.md · docs/index.rst · docs/quickstart.rst

3 files changed: +40 −6 lines

README.md

Lines changed: 8 additions & 6 deletions

@@ -69,12 +69,14 @@ Congratulations! You are done! Now you can train your model with your favorite f
 ### 📦 Models <a name="models"></a>
 
 #### Architectures <a name="architectires"></a>
-- [Unet](https://arxiv.org/abs/1505.04597) and [Unet++](https://arxiv.org/pdf/1807.10165.pdf)
-- [Linknet](https://arxiv.org/abs/1707.03718)
-- [FPN](http://presentations.cocodataset.org/COCO17-Stuff-FAIR.pdf)
-- [PSPNet](https://arxiv.org/abs/1612.01105)
-- [PAN](https://arxiv.org/abs/1805.10180)
-- [DeepLabV3](https://arxiv.org/abs/1706.05587) and [DeepLabV3+](https://arxiv.org/abs/1802.02611)
+- Unet [[paper](https://arxiv.org/abs/1505.04597)] [[docs](https://smp.readthedocs.io/en/latest/models.html#unet)]
+- Unet++ [[paper](https://arxiv.org/pdf/1807.10165.pdf)] [[docs](https://smp.readthedocs.io/en/latest/models.html#id2)]
+- Linknet [[paper](https://arxiv.org/abs/1707.03718)] [[docs](https://smp.readthedocs.io/en/latest/models.html#linknet)]
+- FPN [[paper](http://presentations.cocodataset.org/COCO17-Stuff-FAIR.pdf)] [[docs](https://smp.readthedocs.io/en/latest/models.html#fpn)]
+- PSPNet [[paper](https://arxiv.org/abs/1612.01105)] [[docs](https://smp.readthedocs.io/en/latest/models.html#pspnet)]
+- PAN [[paper](https://arxiv.org/abs/1805.10180)] [[docs](https://smp.readthedocs.io/en/latest/models.html#pan)]
+- DeepLabV3 [[paper](https://arxiv.org/abs/1706.05587)] [[docs](https://smp.readthedocs.io/en/latest/models.html#deeplabv3)]
+- DeepLabV3+ [[paper](https://arxiv.org/abs/1802.02611)] [[docs](https://smp.readthedocs.io/en/latest/models.html#id8)]
 
 #### Encoders <a name="encoders"></a>
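Every architecture listed in the diff above is exposed as a class on the top-level `smp` namespace, and all of them share the same constructor interface, so switching models is a one-line change. A minimal sketch (the encoder and class count here are illustrative, not prescribed by the README):

```python
import segmentation_models_pytorch as smp

# Any architecture from the list is a drop-in replacement for another;
# FPN here is interchangeable with Unet, Linknet, PSPNet, etc.
model = smp.FPN(
    encoder_name="resnet34",    # illustrative encoder choice
    encoder_weights="imagenet",
    in_channels=3,
    classes=3,                  # illustrative number of output classes
)
```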

docs/index.rst

Lines changed: 1 addition & 0 deletions

@@ -11,6 +11,7 @@ Welcome to Segmentation Models's documentation!
    :caption: Contents:
 
    install
+   quickstart
    models
    encoders
docs/quickstart.rst

Lines changed: 31 additions & 0 deletions

@@ -0,0 +1,31 @@
⏳ Quick Start
==============

A segmentation model is just a PyTorch nn.Module, which can be created as easily as:

.. code-block:: python

    import segmentation_models_pytorch as smp

    model = smp.Unet(
        encoder_name="resnet34",        # choose encoder, e.g. mobilenet_v2 or efficientnet-b7
        encoder_weights="imagenet",     # use `imagenet` pretrained weights for encoder initialization
        in_channels=1,                  # model input channels (1 for grayscale images, 3 for RGB, etc.)
        classes=3,                      # model output channels (number of classes in your dataset)
    )

- see the [table](#architectires) of available model architectures
- see the [table](#encoders) of available encoders and their corresponding weights
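Since the model is an ordinary nn.Module, it can be called like any other PyTorch module. A minimal sketch, assuming the single-channel, 3-class configuration above and input sides divisible by 32 (the default encoder downsamples by that factor):

.. code-block:: python

    import torch

    images = torch.randn(4, 1, 256, 256)   # dummy grayscale batch (B, C, H, W)

    model.eval()
    with torch.no_grad():
        masks = model(images)               # raw logits of shape (4, 3, 256, 256)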
**Configure data preprocessing**

All encoders have pretrained weights. Preparing your data the same way as during weights pretraining may give you better results (a higher metric score and faster convergence). However, this is relevant only for 1-, 2-, and 3-channel images, and it is **not necessary** if you train the whole model rather than only the decoder.

.. code-block:: python

    from segmentation_models_pytorch.encoders import get_preprocessing_fn

    preprocess_input = get_preprocessing_fn('resnet18', pretrained='imagenet')
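The returned function normalizes channels-last image arrays with the statistics used for the encoder's pretraining. A minimal usage sketch (the random image is purely illustrative):

.. code-block:: python

    import numpy as np

    # Illustrative HWC image scaled to [0, 1] before normalization.
    image = np.random.rand(256, 256, 3).astype(np.float32)
    image = preprocess_input(image)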
Congratulations! You are done! Now you can train your model with your favorite framework!
