Commit

Merge pull request #29 from dhkim2810/master
Update README according to updated model package
qiaoyu1002 authored Jun 30, 2023
2 parents d3249be + ef9d059 commit 01813ba
Showing 2 changed files with 11 additions and 12 deletions.
18 changes: 8 additions & 10 deletions README.md
@@ -104,19 +104,17 @@ cd MobileSAM; pip install -e .
The MobileSAM can be loaded in the following ways:

```
-from mobile_encoder.setup_mobile_sam import setup_model
-checkpoint = torch.load('../weights/mobile_sam.pt')
-mobile_sam = setup_model()
-mobile_sam.load_state_dict(checkpoint,strict=True)
-```
+from mobile_sam import sam_model_registry, SamAutomaticMaskGenerator, SamPredictor
-Then the model can be easily used in just a few lines to get masks from a given prompt:
+model_type = "vit_t"
+sam_checkpoint = "./weights/mobile_sam.pt"
-```
-from segment_anything import SamPredictor
-device = "cuda"
+device = "cuda" if torch.cuda.is_available() else "cpu"
+mobile_sam = sam_model_registry[model_type](checkpoint=sam_checkpoint)
mobile_sam.to(device=device)
mobile_sam.eval()
predictor = SamPredictor(mobile_sam)
predictor.set_image(<your_image>)
masks, _, _ = predictor.predict(<input_prompts>)
@@ -125,7 +123,7 @@ masks, _, _ = predictor.predict(<input_prompts>)
or generate masks for an entire image:

```
-from segment_anything import SamAutomaticMaskGenerator
+from mobile_sam import SamAutomaticMaskGenerator
mask_generator = SamAutomaticMaskGenerator(mobile_sam)
masks = mask_generator.generate(<your_image>)
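For reference, the placeholders in the updated snippet above can be filled in roughly as follows. This is a minimal sketch, assuming an image file `picture.jpg`, a single foreground point prompt, and OpenCV for loading (none of which come from the repository): `<your_image>` corresponds to the RGB array passed to `set_image`, and `<input_prompts>` to the `point_coords`/`point_labels` arguments.

```
import cv2
import numpy as np
import torch
from mobile_sam import sam_model_registry, SamPredictor

model_type = "vit_t"
sam_checkpoint = "./weights/mobile_sam.pt"
device = "cuda" if torch.cuda.is_available() else "cpu"

mobile_sam = sam_model_registry[model_type](checkpoint=sam_checkpoint)
mobile_sam.to(device=device)
mobile_sam.eval()

# <your_image>: an H x W x 3 uint8 RGB array (OpenCV loads BGR, so convert).
image = cv2.cvtColor(cv2.imread("picture.jpg"), cv2.COLOR_BGR2RGB)

# <input_prompts>: here a single foreground point at (x=500, y=375) with label 1.
point_coords = np.array([[500, 375]])
point_labels = np.array([1])

predictor = SamPredictor(mobile_sam)
predictor.set_image(image)
masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)
print(masks.shape, scores)  # (3, H, W) boolean masks with their predicted quality scores
```

With `multimask_output=True`, `predict` returns three candidate masks plus a quality score for each, so the highest-scoring mask can be picked downstream.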
5 changes: 3 additions & 2 deletions app/README.md
@@ -13,14 +13,15 @@ license: apache-2.0

# Faster Segment Anything(MobileSAM)

-Official PyTorch Implementation of the <a href="https://github.com/ChaoningZhang/MobileSAM">MobileSAM</a>.
+Demo of official PyTorch implementation of the <a href="https://github.com/ChaoningZhang/MobileSAM">MobileSAM</a>.


**MobileSAM** performs on par with the original SAM (at least visually) and keeps exactly the same pipeline as the original SAM except for a change on the image encoder.
Specifically, we replace the original heavyweight ViT-H encoder (632M) with a much smaller Tiny-ViT (5M). On a single GPU, MobileSAM runs around 12ms per image: 8ms on the image encoder and 4ms on the mask decoder.

## To run on local PC
-First, mobile_sam must be installed to run on pc. [Instructions](https://github.com/dhkim2810/MobileSAM/tree/master#installation)
+First, mobile_sam must be installed to run on pc. Refer to [Installation Instruction](https://github.com/dhkim2810/MobileSAM/tree/master#installation)

Then run the following

```
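The app README above quotes around 12 ms per image on a single GPU, split into roughly 8 ms for the image encoder and 4 ms for the mask decoder. A rough sanity check of that split can be scripted as below; this is a sketch that assumes the weights path from the README, a dummy 1024x1024 input, and a single point prompt, and the measured numbers will vary with hardware.

```
import time

import numpy as np
import torch
from mobile_sam import sam_model_registry, SamPredictor

device = "cuda" if torch.cuda.is_available() else "cpu"
mobile_sam = sam_model_registry["vit_t"](checkpoint="./weights/mobile_sam.pt")
mobile_sam.to(device=device)
mobile_sam.eval()
predictor = SamPredictor(mobile_sam)

# Dummy 1024x1024 RGB image and a single foreground point prompt (illustrative values).
image = np.random.randint(0, 255, (1024, 1024, 3), dtype=np.uint8)
point_coords, point_labels = np.array([[512, 512]]), np.array([1])

def timed_ms(fn, repeats=10):
    """Average wall-clock time of fn() in milliseconds, synchronizing the GPU around the loop."""
    fn()  # warm-up call (kernel compilation, memory allocation)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) * 1000 / repeats

# set_image runs the Tiny-ViT image encoder; predict runs the prompt encoder and mask decoder.
encoder_ms = timed_ms(lambda: predictor.set_image(image))
decoder_ms = timed_ms(lambda: predictor.predict(point_coords=point_coords, point_labels=point_labels))
print(f"image encoder: {encoder_ms:.1f} ms | mask decoder: {decoder_ms:.1f} ms")
```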
