ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.3 months ago (distributed domain, exempt) |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |

| Property | Value |
|---|---|
| URL | https://github.com/wkentaro/labelme |
| Last Crawled | 2026-03-29 07:09:32 (8 days ago) |
| First Indexed | 2016-07-20 08:53:55 (9 years ago) |
| HTTP Status Code | 200 |
| Meta Title | GitHub - wkentaro/labelme: Image annotation with Python. Supports polygon, rectangle, circle, line, point, and AI-assisted annotation. · GitHub |
| Meta Description | Image annotation with Python. Supports polygon, rectangle, circle, line, point, and AI-assisted annotation. - wkentaro/labelme |
| Meta Canonical | null |
| Boilerpipe Text | Image annotation with Python.
Description
Labelme is a graphical image annotation tool inspired by http://labelme.csail.mit.edu. It is written in Python and uses Qt for its graphical interface.
Looking for a simple install without Python or Qt? Get the standalone app at labelme.io.
VOC dataset example of instance segmentation.
Other examples (semantic segmentation, bbox detection, and classification).
Various primitives (polygon, rectangle, circle, line, and point).
Multi-language support (English, 中文, 日本語, 한국어, Deutsch, Français, and more).
Features
- Image annotation for polygon, rectangle, circle, line and point (tutorial)
- Image flag annotation for classification and cleaning (#166)
- Video annotation (video annotation)
- GUI customization (predefined labels / flags, auto-saving, label validation, etc) (#144)
- Exporting VOC-format dataset for semantic segmentation, instance segmentation
- Exporting COCO-format dataset for instance segmentation
- AI-assisted point-to-polygon/mask annotation by SAM, EfficientSAM models
- AI text-to-annotation by YOLO-world, SAM3 models
🌏 Available in 16 languages - English · 日本語 · 한국어 · 简体中文 · 繁體中文 · Deutsch · Français · Español · Italiano · Português · Nederlands · Magyar · Tiếng Việt · Türkçe · Polski · فارسی (LANG=ja_JP.UTF-8 labelme)
Installation
There are 3 options to install labelme:
Option 1: Using pip
For more detail, check "Install Labelme using Terminal".
pip install labelme
# To install the latest version from GitHub:
# pip install git+https://github.com/wkentaro/labelme.git
Option 2: Using standalone executable (Easiest)
If you're willing to invest in the convenience of simple installation without any dependencies (Python, Qt), you can download the standalone executable from "Install Labelme as App". It's a one-time payment for lifetime access, and it helps us to maintain this project.
Option 3: Using a package manager in each Linux distribution
In some Linux distributions, you can install labelme via their package managers (e.g., apt, pacman). The following systems are currently available:
Usage
Run labelme --help for details. The annotations are saved as a JSON file.
labelme  # just open gui
# tutorial (single image example)
cd examples/tutorial
labelme apc2016_obj3.jpg  # specify image file
labelme apc2016_obj3.jpg --output annotations/  # save annotation JSON files to a directory
labelme apc2016_obj3.jpg --with-image-data  # include image data in JSON file
labelme apc2016_obj3.jpg \
  --labels highland_6539_self_stick_notes,mead_index_cards,kong_air_dog_squeakair_tennis_ball  # specify label list
# semantic segmentation example
cd examples/semantic_segmentation
labelme data_annotated/  # Open directory to annotate all images in it
labelme data_annotated/ --labels labels.txt  # specify label list with a file
Command Line Arguments
--output specifies the location that annotations will be written to. If the location ends with .json, a single annotation will be written to this file. Only one image can be annotated if a location is specified with .json. If the location does not end with .json, the program will assume it is a directory. Annotations will be stored in this directory with a name that corresponds to the image that the annotation was made on.
The first time you run labelme, it will create a config file at ~/.labelmerc. Add only the settings you want to override. For all available options and their defaults, see default_config.yaml. If you would prefer to use a config file from another location, you can specify this file with the --config flag.
Without the --nosortlabels flag, the program will list labels in alphabetical order. When the program is run with this flag, it will display labels in the order that they are provided.
Flags are assigned to an entire image. Example
Labels are assigned to a single polygon. Example
FAQ
- How to convert JSON file to numpy array? See examples/tutorial.
- How to load label PNG file? See examples/tutorial.
- How to get annotations for semantic segmentation? See examples/semantic_segmentation.
- How to get annotations for instance segmentation? See examples/instance_segmentation.
Examples
Image Classification · Bounding Box Detection · Semantic Segmentation · Instance Segmentation · Video Annotation
How to build standalone executable
LABELME_PATH=./labelme
OSAM_PATH=$(python -c 'import os, osam; print(os.path.dirname(osam.__file__))')
pip install 'numpy<2.0'  # numpy>=2.0 causes build errors (see #1532)
pyinstaller labelme/labelme/__main__.py \
  --name=Labelme \
  --windowed \
  --noconfirm \
  --specpath=build \
  --add-data=${OSAM_PATH}/_models/yoloworld/clip/bpe_simple_vocab_16e6.txt.gz:osam/_models/yoloworld/clip \
  --add-data=${LABELME_PATH}/config/default_config.yaml:labelme/config \
  --add-data=${LABELME_PATH}/icons/*:labelme/icons \
  --add-data=${LABELME_PATH}/translate/*:translate \
  --icon=${LABELME_PATH}/icons/icon-256.png \
  --onedir
Acknowledgement
This repo is the fork of mpitid/pylabelme. |
| Markdown | [wkentaro](https://github.com/wkentaro) / **[labelme](https://github.com/wkentaro/labelme)** Public
[Code](https://github.com/wkentaro/labelme) · [Issues 109](https://github.com/wkentaro/labelme/issues) · [Pull requests 73](https://github.com/wkentaro/labelme/pulls) · [Discussions](https://github.com/wkentaro/labelme/discussions) · [Actions](https://github.com/wkentaro/labelme/actions) · [Security](https://github.com/wkentaro/labelme/security) · [Insights](https://github.com/wkentaro/labelme/pulse)
# wkentaro/labelme
main · [**27** Branches](https://github.com/wkentaro/labelme/branches) · [**220** Tags](https://github.com/wkentaro/labelme/tags)
Latest commit by [wkentaro](https://github.com/wkentaro/labelme/commits?author=wkentaro): [Merge pull request #1905 from wkentaro/dependabot/uv/cryptography-46.0.6](https://github.com/wkentaro/labelme/commit/2cd7d5a049785cd4b8bc2fb397bbc24da07eaccd) ([2cd7d5a](https://github.com/wkentaro/labelme/commit/2cd7d5a049785cd4b8bc2fb397bbc24da07eaccd), Mar 28, 2026) · [2,225 Commits](https://github.com/wkentaro/labelme/commits/main/)
## Repository files navigation
- [README](https://github.com/wkentaro/labelme)
- [GPL-3.0 license](https://github.com/wkentaro/labelme)
# [](https://github.com/wkentaro/labelme/blob/main/labelme/icons/icon-256.png) labelme
#### Image annotation with Python.
[](https://pypi.python.org/pypi/labelme) [](https://github.com/wkentaro/labelme/actions) [](https://discord.com/invite/uAjxGcJm83)
[**Installation**](https://github.com/wkentaro/labelme#installation) \| [**Usage**](https://github.com/wkentaro/labelme#usage) \| [**Examples**](https://github.com/wkentaro/labelme#examples) \| [**labelme.io ↗**](https://labelme.io/)
[](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/.readme/annotation.jpg)
## Description
Labelme is a graphical image annotation tool inspired by <http://labelme.csail.mit.edu>.
It is written in Python and uses Qt for its graphical interface.
> Looking for a simple install without Python or Qt? Get the standalone app at **[labelme.io](https://labelme.io/)**.
[](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/data_dataset_voc/JPEGImages/2011_000006.jpg) [](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/data_dataset_voc/SegmentationClass/2011_000006.png) [](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/data_dataset_voc/SegmentationClassVisualization/2011_000006.jpg) [](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/data_dataset_voc/SegmentationObject/2011_000006.png) [](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/data_dataset_voc/SegmentationObjectVisualization/2011_000006.jpg)
*VOC dataset example of instance segmentation.*
[](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation/.readme/annotation.jpg) [](https://github.com/wkentaro/labelme/blob/main/examples/bbox_detection/.readme/annotation.jpg) [](https://github.com/wkentaro/labelme/blob/main/examples/classification/.readme/annotation_cat.jpg)
*Other examples (semantic segmentation, bbox detection, and classification).*
[](https://user-images.githubusercontent.com/4310419/47907116-85667800-de82-11e8-83d0-b9f4eb33268f.gif) [](https://user-images.githubusercontent.com/4310419/47922172-57972880-deae-11e8-84f8-e4324a7c856a.gif) [](https://user-images.githubusercontent.com/14256482/46932075-92145f00-d080-11e8-8d09-2162070ae57c.png)
*Various primitives (polygon, rectangle, circle, line, and point).*
[](https://private-user-images.githubusercontent.com/4310419/559373896-53bf09db-b097-48b7-9f32-ab490da5ac53.gif?jwt=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NzQ3Njg0NzQsIm5iZiI6MTc3NDc2ODE3NCwicGF0aCI6Ii80MzEwNDE5LzU1OTM3Mzg5Ni01M2JmMDlkYi1iMDk3LTQ4YjctOWYzMi1hYjQ5MGRhNWFjNTMuZ2lmP1gtQW16LUFsZ29yaXRobT1BV1M0LUhNQUMtU0hBMjU2JlgtQW16LUNyZWRlbnRpYWw9QUtJQVZDT0RZTFNBNTNQUUs0WkElMkYyMDI2MDMyOSUyRnVzLWVhc3QtMSUyRnMzJTJGYXdzNF9yZXF1ZXN0JlgtQW16LURhdGU9MjAyNjAzMjlUMDcwOTM0WiZYLUFtei1FeHBpcmVzPTMwMCZYLUFtei1TaWduYXR1cmU9MzgwYzUzM2QzNzZhZGQzMTRiZmI0NzQ3M2U1MDQwZDkxMDg4M2Y0OGIzYmM2OTRmMDNkYmFkODA1MjkwMzc3ZCZYLUFtei1TaWduZWRIZWFkZXJzPWhvc3QifQ.HKFgJUKdxtaADtN4j7gpj7tyrIpUGqCM2YaQqEIA8NU)
*Multi-language support (English, 中文, 日本語, 한국어, Deutsch, Français, and more).*
## Features
- Image annotation for polygon, rectangle, circle, line and point ([tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial))
- Image flag annotation for classification and cleaning ([\#166](https://github.com/wkentaro/labelme/pull/166))
- Video annotation ([video annotation](https://github.com/wkentaro/labelme/blob/main/examples/video_annotation))
- GUI customization (predefined labels / flags, auto-saving, label validation, etc) ([\#144](https://github.com/wkentaro/labelme/pull/144))
- Exporting VOC-format dataset for [semantic segmentation](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation), [instance segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation)
- Exporting COCO-format dataset for [instance segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation)
- AI-assisted point-to-polygon/mask annotation by SAM, EfficientSAM models
- AI text-to-annotation by YOLO-world, SAM3 models
**🌏 Available in 16 languages** - English · 日本語 · 한국어 · 简体中文 · 繁體中文 · Deutsch · Français · Español · Italiano · Português · Nederlands · Magyar · Tiếng Việt · Türkçe · Polski · فارسی (`LANG=ja_JP.UTF-8 labelme`)
## Installation
There are 3 options to install labelme:
### Option 1: Using pip
For more detail, check ["Install Labelme using Terminal"](https://www.labelme.io/docs/install-labelme-terminal)
```
pip install labelme
# To install the latest version from GitHub:
# pip install git+https://github.com/wkentaro/labelme.git
```
### Option 2: Using standalone executable (Easiest)
If you're willing to invest in the convenience of simple installation without any dependencies (Python, Qt), you can download the standalone executable from ["Install Labelme as App"](https://www.labelme.io/docs/install-labelme-app).
It's a one-time payment for lifetime access, and it helps us to maintain this project.
### Option 3: Using a package manager in each Linux distribution
In some Linux distributions, you can install labelme via their package managers (e.g., apt, pacman). The following systems are currently available:
[](https://repology.org/project/labelme/versions)
## Usage
Run `labelme --help` for details.
The annotations are saved as a [JSON](http://www.json.org/) file.
```
labelme # just open gui
# tutorial (single image example)
cd examples/tutorial
labelme apc2016_obj3.jpg # specify image file
labelme apc2016_obj3.jpg --output annotations/ # save annotation JSON files to a directory
labelme apc2016_obj3.jpg --with-image-data # include image data in JSON file
labelme apc2016_obj3.jpg \
--labels highland_6539_self_stick_notes,mead_index_cards,kong_air_dog_squeakair_tennis_ball # specify label list
# semantic segmentation example
cd examples/semantic_segmentation
labelme data_annotated/ # Open directory to annotate all images in it
labelme data_annotated/ --labels labels.txt # specify label list with a file
```
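Each annotated image produces one JSON file next to it (or in the `--output` directory). As a minimal sketch of that on-disk format, the snippet below writes and reads back a labelme-style annotation; the field names follow the commonly seen labelme output schema, but exact keys can vary between versions:

```
import json

# A minimal labelme-style annotation for one image (typical fields;
# confirm against the JSON your labelme version actually writes).
annotation = {
    "version": "5.0.0",
    "flags": {},
    "shapes": [
        {
            "label": "person",
            "points": [[10.0, 20.0], [50.0, 20.0], [50.0, 80.0], [10.0, 80.0]],
            "shape_type": "polygon",
            "group_id": None,
            "flags": {},
        }
    ],
    "imagePath": "apc2016_obj3.jpg",
    "imageData": None,  # base64-encoded image when --with-image-data is used
    "imageHeight": 100,
    "imageWidth": 100,
}

with open("apc2016_obj3.json", "w") as f:
    json.dump(annotation, f, indent=2)

# Reading the annotation back is plain JSON parsing:
with open("apc2016_obj3.json") as f:
    data = json.load(f)

labels = [shape["label"] for shape in data["shapes"]]
print(labels)  # -> ['person']
```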
### Command Line Arguments
- `--output` specifies the location that annotations will be written to. If the location ends with .json, a single annotation will be written to this file. Only one image can be annotated if a location is specified with .json. If the location does not end with .json, the program will assume it is a directory. Annotations will be stored in this directory with a name that corresponds to the image that the annotation was made on.
- The first time you run labelme, it will create a config file at `~/.labelmerc`. Add only the settings you want to override. For all available options and their defaults, see [`default_config.yaml`](https://github.com/wkentaro/labelme/blob/main/labelme/config/default_config.yaml). If you would prefer to use a config file from another location, you can specify this file with the `--config` flag.
- Without the `--nosortlabels` flag, the program will list labels in alphabetical order. When the program is run with this flag, it will display labels in the order that they are provided.
- Flags are assigned to an entire image. [Example](https://github.com/wkentaro/labelme/blob/main/examples/classification)
- Labels are assigned to a single polygon. [Example](https://github.com/wkentaro/labelme/blob/main/examples/bbox_detection)
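Since `~/.labelmerc` starts empty apart from what you add, a small override file might look like the fragment below; the key names here are illustrative assumptions, so confirm them against `default_config.yaml` before relying on them:

```
# ~/.labelmerc -- override only what you need; everything else
# falls back to default_config.yaml (key names are illustrative).
auto_save: true      # write the JSON file on every change
sort_labels: false   # keep labels in the order they are provided
```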
### FAQ
- **How to convert JSON file to numpy array?** See [examples/tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial#convert-to-dataset).
- **How to load label PNG file?** See [examples/tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial#how-to-load-label-png-file).
- **How to get annotations for semantic segmentation?** See [examples/semantic\_segmentation](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation).
- **How to get annotations for instance segmentation?** See [examples/instance\_segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation).
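labelme ships conversion scripts for these FAQ items in the linked example directories. As a self-contained illustration of the underlying idea (the function name and the minimal shape schema here are hypothetical, not labelme's API), polygon annotations can be rasterized into a numpy label mask with PIL:

```
import numpy as np
from PIL import Image, ImageDraw

def shapes_to_mask(img_shape, shapes, label_to_value):
    """Rasterize labelme-style polygon shapes into an integer label mask.

    img_shape: (height, width); shapes: dicts with "label" and "points",
    a minimal subset of the fields found in labelme JSON files.
    """
    mask = Image.new("I", (img_shape[1], img_shape[0]), 0)  # 32-bit int image
    draw = ImageDraw.Draw(mask)
    for shape in shapes:
        value = label_to_value[shape["label"]]
        polygon = [tuple(pt) for pt in shape["points"]]
        draw.polygon(polygon, fill=value)  # fill polygon with the class value
    return np.asarray(mask, dtype=np.int32)

shapes = [{"label": "cat", "points": [[2, 2], [7, 2], [7, 7], [2, 7]]}]
mask = shapes_to_mask((10, 10), shapes, {"_background_": 0, "cat": 1})
print(mask.shape, mask[4, 4], mask[0, 0])  # -> (10, 10) 1 0
```

The same mask can then be saved as a label PNG or fed to a training pipeline; overlapping polygons are resolved by draw order, with later shapes painting over earlier ones.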
## Examples
- [Image Classification](https://github.com/wkentaro/labelme/blob/main/examples/classification)
- [Bounding Box Detection](https://github.com/wkentaro/labelme/blob/main/examples/bbox_detection)
- [Semantic Segmentation](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation)
- [Instance Segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation)
- [Video Annotation](https://github.com/wkentaro/labelme/blob/main/examples/video_annotation)
## How to build standalone executable
```
LABELME_PATH=./labelme
OSAM_PATH=$(python -c 'import os, osam; print(os.path.dirname(osam.__file__))')
pip install 'numpy<2.0' # numpy>=2.0 causes build errors (see #1532)
pyinstaller labelme/labelme/__main__.py \
--name=Labelme \
--windowed \
--noconfirm \
--specpath=build \
--add-data=${OSAM_PATH}/_models/yoloworld/clip/bpe_simple_vocab_16e6.txt.gz:osam/_models/yoloworld/clip \
--add-data=${LABELME_PATH}/config/default_config.yaml:labelme/config \
--add-data=${LABELME_PATH}/icons/*:labelme/icons \
--add-data=${LABELME_PATH}/translate/*:translate \
--icon=${LABELME_PATH}/icons/icon-256.png \
--onedir
```
## Acknowledgement
This repo is the fork of [mpitid/pylabelme](https://github.com/mpitid/pylabelme).
## About
Image annotation with Python. Supports polygon, rectangle, circle, line, point, and AI-assisted annotation.
[labelme.io](https://labelme.io/ "https://labelme.io")
### Topics
[python](https://github.com/topics/python "Topic: python") [computer-vision](https://github.com/topics/computer-vision "Topic: computer-vision") [deep-learning](https://github.com/topics/deep-learning "Topic: deep-learning") [image-annotation](https://github.com/topics/image-annotation "Topic: image-annotation") [video-annotation](https://github.com/topics/video-annotation "Topic: video-annotation") [annotations](https://github.com/topics/annotations "Topic: annotations") [classification](https://github.com/topics/classification "Topic: classification") [semantic-segmentation](https://github.com/topics/semantic-segmentation "Topic: semantic-segmentation") [instance-segmentation](https://github.com/topics/instance-segmentation "Topic: instance-segmentation")
### Resources
[Readme](https://github.com/wkentaro/labelme#readme-ov-file)
### License
[GPL-3.0 license](https://github.com/wkentaro/labelme#GPL-3.0-1-ov-file)
[Activity](https://github.com/wkentaro/labelme/activity)
### Stars
[**15\.7k** stars](https://github.com/wkentaro/labelme/stargazers)
### Watchers
[**148** watching](https://github.com/wkentaro/labelme/watchers)
### Forks
[**3\.7k** forks](https://github.com/wkentaro/labelme/forks)
[Report repository](https://github.com/contact/report-content?content_url=https%3A%2F%2Fgithub.com%2Fwkentaro%2Flabelme&report=wkentaro+%28user%29)
## [Releases 86](https://github.com/wkentaro/labelme/releases)
[v6.0.0 Latest Mar 28, 2026](https://github.com/wkentaro/labelme/releases/tag/v6.0.0)
[\+ 85 releases](https://github.com/wkentaro/labelme/releases)
## [Packages 0](https://github.com/users/wkentaro/packages?repo_name=labelme)
No packages published
## [Used by 1\.3k](https://github.com/wkentaro/labelme/network/dependents)
[        + 1,334](https://github.com/wkentaro/labelme/network/dependents)
## [Contributors](https://github.com/wkentaro/labelme/graphs/contributors)
## Languages
- [Python 99.5%](https://github.com/wkentaro/labelme/search?l=python)
- [Makefile 0.5%](https://github.com/wkentaro/labelme/search?l=makefile) |
| Readable Markdown | Image annotation with Python.
[](https://pypi.python.org/pypi/labelme) [](https://github.com/wkentaro/labelme/actions) [](https://discord.com/invite/uAjxGcJm83)
[](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/.readme/annotation.jpg)
Description
Labelme is a graphical image annotation tool inspired by [http://labelme.csail.mit.edu](http://labelme.csail.mit.edu/).
It is written in Python and uses Qt for its graphical interface.
> Looking for a simple install without Python or Qt? Get the standalone app at **[labelme.io](https://labelme.io/)**.
[](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/data_dataset_voc/JPEGImages/2011_000006.jpg) [](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/data_dataset_voc/SegmentationClass/2011_000006.png) [](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/data_dataset_voc/SegmentationClassVisualization/2011_000006.jpg) [](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/data_dataset_voc/SegmentationObject/2011_000006.png) [](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/data_dataset_voc/SegmentationObjectVisualization/2011_000006.jpg)
*VOC dataset example of instance segmentation.*
[](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation/.readme/annotation.jpg) [](https://github.com/wkentaro/labelme/blob/main/examples/bbox_detection/.readme/annotation.jpg) [](https://github.com/wkentaro/labelme/blob/main/examples/classification/.readme/annotation_cat.jpg)
*Other examples (semantic segmentation, bbox detection, and classification).*
[](https://user-images.githubusercontent.com/4310419/47907116-85667800-de82-11e8-83d0-b9f4eb33268f.gif) [](https://user-images.githubusercontent.com/4310419/47922172-57972880-deae-11e8-84f8-e4324a7c856a.gif) [](https://user-images.githubusercontent.com/14256482/46932075-92145f00-d080-11e8-8d09-2162070ae57c.png)
*Various primitives (polygon, rectangle, circle, line, and point).*
[](https://private-user-images.githubusercontent.com/4310419/559373896-53bf09db-b097-48b7-9f32-ab490da5ac53.gif?jwt=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NzQ3Njg0NzQsIm5iZiI6MTc3NDc2ODE3NCwicGF0aCI6Ii80MzEwNDE5LzU1OTM3Mzg5Ni01M2JmMDlkYi1iMDk3LTQ4YjctOWYzMi1hYjQ5MGRhNWFjNTMuZ2lmP1gtQW16LUFsZ29yaXRobT1BV1M0LUhNQUMtU0hBMjU2JlgtQW16LUNyZWRlbnRpYWw9QUtJQVZDT0RZTFNBNTNQUUs0WkElMkYyMDI2MDMyOSUyRnVzLWVhc3QtMSUyRnMzJTJGYXdzNF9yZXF1ZXN0JlgtQW16LURhdGU9MjAyNjAzMjlUMDcwOTM0WiZYLUFtei1FeHBpcmVzPTMwMCZYLUFtei1TaWduYXR1cmU9MzgwYzUzM2QzNzZhZGQzMTRiZmI0NzQ3M2U1MDQwZDkxMDg4M2Y0OGIzYmM2OTRmMDNkYmFkODA1MjkwMzc3ZCZYLUFtei1TaWduZWRIZWFkZXJzPWhvc3QifQ.HKFgJUKdxtaADtN4j7gpj7tyrIpUGqCM2YaQqEIA8NU)
*Multi-language support (English, 中文, 日本語, 한국어, Deutsch, Français, and more).*
Features
- Image annotation for polygon, rectangle, circle, line and point ([tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial))
- Image flag annotation for classification and cleaning ([\#166](https://github.com/wkentaro/labelme/pull/166))
- Video annotation ([video annotation](https://github.com/wkentaro/labelme/blob/main/examples/video_annotation))
- GUI customization (predefined labels / flags, auto-saving, label validation, etc) ([\#144](https://github.com/wkentaro/labelme/pull/144))
- Exporting VOC-format dataset for [semantic segmentation](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation), [instance segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation)
- Exporting COCO-format dataset for [instance segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation)
- AI-assisted point-to-polygon/mask annotation by SAM, EfficientSAM models
- AI text-to-annotation by YOLO-world, SAM3 models
**🌏 Available in 16 languages** - English · 日本語 · 한국어 · 简体中文 · 繁體中文 · Deutsch · Français · Español · Italiano · Português · Nederlands · Magyar · Tiếng Việt · Türkçe · Polski · فارسی (`LANG=ja_JP.UTF-8 labelme`)
Installation
There are 3 options to install labelme:
Option 1: Using pip
For more detail, check ["Install Labelme using Terminal"](https://www.labelme.io/docs/install-labelme-terminal)
```
pip install labelme
# To install the latest version from GitHub:
# pip install git+https://github.com/wkentaro/labelme.git
```
Option 2: Using standalone executable (Easiest)
If you prefer a simple installation with no dependencies (Python, Qt), you can download the standalone executable from ["Install Labelme as App"](https://www.labelme.io/docs/install-labelme-app).
It's a one-time payment for lifetime access, and it helps us maintain this project.
### Option 3: Using a package manager in each Linux distribution
In some Linux distributions, you can install labelme via their package managers (e.g., apt, pacman). The following systems are currently available:
[Packaging status](https://repology.org/project/labelme/versions)
## Usage
Run `labelme --help` for details.
The annotations are saved as a [JSON](http://www.json.org/) file.
```
labelme # just open gui
# tutorial (single image example)
cd examples/tutorial
labelme apc2016_obj3.jpg # specify image file
labelme apc2016_obj3.jpg --output annotations/ # save annotation JSON files to a directory
labelme apc2016_obj3.jpg --with-image-data # include image data in JSON file
labelme apc2016_obj3.jpg \
--labels highland_6539_self_stick_notes,mead_index_cards,kong_air_dog_squeakair_tennis_ball # specify label list
# semantic segmentation example
cd examples/semantic_segmentation
labelme data_annotated/ # Open directory to annotate all images in it
labelme data_annotated/ --labels labels.txt # specify label list with a file
```
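Each saved annotation is a JSON file whose top-level keys include `shapes` (one entry per polygon, rectangle, etc.) and image metadata. A minimal sketch of reading one back with only the standard library — the field names follow the labelme JSON schema, but verify them against a file produced by your own labelme version:

```python
import json

# A small annotation in the shape labelme writes (values are illustrative).
annotation = {
    "version": "5.0.0",
    "flags": {},
    "shapes": [
        {
            "label": "mead_index_cards",
            "points": [[10.0, 20.0], [110.0, 20.0], [110.0, 90.0], [10.0, 90.0]],
            "shape_type": "polygon",
        }
    ],
    "imagePath": "apc2016_obj3.jpg",
    "imageHeight": 480,
    "imageWidth": 640,
}

with open("apc2016_obj3.json", "w") as f:
    json.dump(annotation, f, indent=2)

# Read the file back and list each labeled shape with its vertex count.
with open("apc2016_obj3.json") as f:
    data = json.load(f)

for shape in data["shapes"]:
    print(shape["label"], shape["shape_type"], len(shape["points"]))
```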
### Command Line Arguments
- `--output` specifies where annotations are written. If the path ends with `.json`, a single annotation is written to that file; in that case only one image can be annotated. Otherwise the path is treated as a directory, and each annotation is stored there under a name matching its source image.
- The first time you run labelme, it will create a config file at `~/.labelmerc`. Add only the settings you want to override. For all available options and their defaults, see [`default_config.yaml`](https://github.com/wkentaro/labelme/blob/main/labelme/config/default_config.yaml). If you would prefer to use a config file from another location, you can specify this file with the `--config` flag.
- By default, labels are listed in alphabetical order; with the `--nosortlabels` flag, they are displayed in the order they are provided.
- Flags are assigned to an entire image. [Example](https://github.com/wkentaro/labelme/blob/main/examples/classification)
- Labels are assigned to a single polygon. [Example](https://github.com/wkentaro/labelme/blob/main/examples/bbox_detection)
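For example, a `~/.labelmerc` that overrides just a few settings might look like the sketch below. The keys shown are believed to match `default_config.yaml`, but confirm them against the version you have installed:

```yaml
# ~/.labelmerc -- override only what you need; everything else
# falls back to labelme/config/default_config.yaml.
auto_save: true
sort_labels: false
labels: [person, car, bicycle]
```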
## FAQ
- **How to convert JSON file to numpy array?** See [examples/tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial#convert-to-dataset).
- **How to load label PNG file?** See [examples/tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial#how-to-load-label-png-file).
- **How to get annotations for semantic segmentation?** See [examples/semantic\_segmentation](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation).
- **How to get annotations for instance segmentation?** See [examples/instance\_segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation).
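Related to the bbox detection example: polygon annotations can be reduced to axis-aligned boxes without any labelme-specific tooling. A stdlib-only sketch (the polygon below is hypothetical):

```python
def polygon_to_bbox(points):
    """Return (xmin, ymin, xmax, ymax) for a list of [x, y] vertices."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

# Vertices in image coordinates, as stored under "points" in the JSON file.
polygon = [[12.5, 40.0], [88.0, 35.5], [95.0, 120.0], [20.0, 110.0]]
print(polygon_to_bbox(polygon))  # → (12.5, 35.5, 95.0, 120.0)
```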
## Examples
- [Image Classification](https://github.com/wkentaro/labelme/blob/main/examples/classification)
- [Bounding Box Detection](https://github.com/wkentaro/labelme/blob/main/examples/bbox_detection)
- [Semantic Segmentation](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation)
- [Instance Segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation)
- [Video Annotation](https://github.com/wkentaro/labelme/blob/main/examples/video_annotation)
## How to build standalone executable
```
LABELME_PATH=./labelme
OSAM_PATH=$(python -c 'import os, osam; print(os.path.dirname(osam.__file__))')
pip install 'numpy<2.0'  # numpy>=2.0 causes build errors (see #1532)
pyinstaller ${LABELME_PATH}/__main__.py \
  --name=Labelme \
  --windowed \
  --noconfirm \
  --specpath=build \
  --add-data=${OSAM_PATH}/_models/yoloworld/clip/bpe_simple_vocab_16e6.txt.gz:osam/_models/yoloworld/clip \
  --add-data=${LABELME_PATH}/config/default_config.yaml:labelme/config \
  --add-data=${LABELME_PATH}/icons/*:labelme/icons \
  --add-data=${LABELME_PATH}/translate/*:translate \
  --icon=${LABELME_PATH}/icons/icon-256.png \
  --onedir
```
## Acknowledgement
This repo is a fork of [mpitid/pylabelme](https://github.com/mpitid/pylabelme).