Machine Learning plugins for GIMP 3.
Forked from the original version to improve the user experience in several aspects:
- Added more models.
- Models are run with Python 3.10+.
- Full error text is shown in the GIMP error dialog and in the debug console (see the sketch after this list).
- Additional alpha channel handling in some plugins.
- Automatic installation for Windows systems.
- And other smaller improvements.
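To illustrate the error-reporting change, a plugin can forward the complete traceback to GIMP instead of swallowing it. The following is a minimal sketch, assuming the GIMP 3 GObject Introspection bindings (gi.repository.Gimp); run_model_safely and model_call are hypothetical names, not functions from this repository.

```python
import sys
import traceback

import gi
gi.require_version("Gimp", "3.0")
from gi.repository import Gimp


def run_model_safely(model_call, *args, **kwargs):
    """Run a model call and surface the full error text on failure.

    `model_call` stands in for whatever inference a plugin actually performs.
    """
    try:
        return model_call(*args, **kwargs)
    except Exception:
        details = traceback.format_exc()
        Gimp.message(details)            # full text in GIMP's error dialog/console
        print(details, file=sys.stderr)  # echoed to the debug console
        raise
```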
The plugins have been tested with GIMP 2.99.12 on the following systems:
- Windows 10

To install:
- Install GIMP 3.
- Download this repository.
- On Windows:
  - Install Python 3.10.
  - Run `install.cmd` from the unpacked folder.
- You should now find the GIMP-ML plugins under Layers → GIMP-ML.
- You can download the weights here, or from the weight links below.
References:

- Source: https://github.com/danielgatis/rembg
- Weights:
- u2net (download, source): A pre-trained model for general use cases.
- u2netp (download, source): A lightweight version of u2net model.
- u2net_human_seg (download, source): A pre-trained model for human segmentation.
- (unused) u2net_cloth_seg (download, source): A pre-trained model for clothes parsing from human portraits. Clothes are parsed into 3 categories: upper body, lower body and full body.
- License: MIT License
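Outside of GIMP, the u2net weights listed above can be exercised directly with the rembg package, which is a quick way to verify a downloaded model. A minimal sketch, assuming rembg's new_session/remove API; input.png and output.png are placeholder paths:

```python
from PIL import Image
from rembg import new_session, remove

# Pick one of the weights listed above: "u2net", "u2netp" or "u2net_human_seg".
session = new_session("u2net")

image = Image.open("input.png")          # placeholder input path
result = remove(image, session=session)  # RGBA image with the background removed
result.save("output.png")                # placeholder output path
```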
- Source: https://github.com/youyuge34/Anime-InPainting
- Weights: Google Drive | Baidu
- License: Creative Commons Attribution-NonCommercial 4.0 International
@inproceedings{nazeri2019edgeconnect,
title={EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning},
author={Nazeri, Kamyar and Ng, Eric and Joseph, Tony and Qureshi, Faisal and Ebrahimi, Mehran},
journal={arXiv preprint},
year={2019}}
- Source:
- Demosaics: https://github.com/rekaXua/demosaic_project
- ESRGAN: https://github.com/xinntao/ESRGAN
- Weights: 4x_FatalPixels
- Licenses:
- Demosaics: GNU Affero General Public License v3.0
- ESRGAN: Apache-2.0 license
- Paper: Xintao Wang, Liangbin Xie, Chao Dong, Ying Shan; Applied Research Center (ARC), Tencent PCG; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
- Source: https://github.com/a-mos/High_Resolution_Image_Inpainting
- License: Creative Commons Attribution-NonCommercial 4.0 International
@article{Moskalenko_2020,
doi = {10.51130/graphicon-2020-2-4-18},
url = {https://doi.org/10.51130%2Fgraphicon-2020-2-4-18},
year = 2020,
month = {dec},
pages = {short18--1--short18--9},
author = {Andrey Moskalenko and Mikhail Erofeev and Dmitriy Vatolin},
title = {Deep Two-Stage High-Resolution Image Inpainting},
journal = {Proceedings of the 30th International Conference on Computer Graphics and Machine Vision ({GraphiCon} 2020). Part 2}}
- Source: https://github.com/twtygqyy/pytorch-SRResNet
- Torch Hub fork: https://github.com/valgur/pytorch-SRResNet
- License: MIT
- C. Ledig et al., “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 105–114.
- Source: https://github.com/zeruniverse/neural-colorization
- Torch Hub fork: https://github.com/valgur/neural-colorization
- License:
- GNU GPL 3.0 for personal or research use
- Commercial use prohibited
- Model weights released under CC BY 4.0
- Based on fast-neural-style:
- https://github.com/jcjohnson/fast-neural-style
- License:
- Free for personal or research use
- For commercial use please contact the authors
- J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual Losses for Real-Time Style Transfer and Super-Resolution,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9906 LNCS, 2016, pp. 694–711.
- Source: https://github.com/xavysp/DexiNed
- Weights: BIPED
- License: MIT license
@misc{soria2021dexined_ext,
title={Dense Extreme Inception Network for Edge Detection},
author={Xavier Soria and Angel Sappa and Patricio Humanante and Arash Akbarinia},
year={2021},
eprint={2112.02250},
archivePrefix={arXiv},
primaryClass={cs.CV}}
- Source: https://github.com/TAMU-VITA/DeblurGANv2
- Torch Hub fork: https://github.com/valgur/DeblurGANv2
- License: BSD 3-clause
- O. Kupyn, T. Martyniuk, J. Wu, and Z. Wang, “DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 8877–8886.
- Source: https://github.com/nianticlabs/monodepth2
- Torch Hub fork: https://github.com/valgur/monodepth2
- License:
- See the license file for terms
- Copyright © Niantic, Inc. 2019. Patent Pending. All rights reserved.
- Non-commercial use only
- C. Godard, O. Mac Aodha, M. Firman, and G. Brostow, “Digging Into Self-Supervised Monocular Depth Estimation,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 3827–3837.
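The "Torch Hub fork" links above mean those models are intended to be loaded through torch.hub rather than by vendoring their code. A minimal sketch of that pattern, assuming each fork ships a hubconf.py; the entry-point name and the pretrained keyword used here are hypothetical and should be replaced with whatever torch.hub.list() reports for the fork:

```python
import torch

# Discover which entry points a fork actually exposes in its hubconf.py.
print(torch.hub.list("valgur/monodepth2"))

# Load one of them; "monodepth2" is a hypothetical entry-point name,
# so substitute a name reported by torch.hub.list() above.
model = torch.hub.load("valgur/monodepth2", "monodepth2", pretrained=True)
model.eval()
```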
Credits:

- UserUnknownFactor
- Kritik Soman (kritiksoman) – original GIMP-ML implementation
License: MIT
Please note that additional license terms apply to each individual model; see the references list for details. Many of the models restrict usage to non-commercial or research purposes only.