Anime Drawings Dataset (2024)


Design2Cloth: 3D Cloth Generation from 2D Masks

Apr 03, 2024

Jiali Zheng, Rolandos Alexandros Potamias, Stefanos Zafeiriou

In recent years, there has been a significant shift in digital avatar research towards modeling, animating and reconstructing clothed human representations, a key step towards creating realistic avatars. However, current 3D cloth generation methods are garment-specific or trained entirely on synthetic data, and hence lack fine details and realism. In this work, we take a step towards automatic, realistic garment design and propose Design2Cloth, a high-fidelity 3D generative model trained on a real-world dataset of more than 2000 subject scans. To provide a vital contribution to the fashion industry, we developed a user-friendly adversarial model capable of generating diverse and detailed clothes simply from a drawn 2D cloth mask. Through a series of qualitative and quantitative experiments, we show that Design2Cloth outperforms current state-of-the-art cloth generative models by a large margin. Beyond the generative properties of our network, we show that the proposed method can produce high-quality reconstructions from single in-the-wild images and 3D scans. The dataset, code and pre-trained model will be made publicly available.
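
As an interface-level illustration of the mask-driven design idea described above, the sketch below maps a drawn 2D cloth mask and a noise code to per-vertex offsets on a fixed garment template. The encoder layout, latent size, mask resolution and the template-offset output representation are all assumptions for illustration, not the Design2Cloth architecture.

```python
# Hypothetical sketch of a mask-conditioned garment generator; layer sizes,
# mask resolution and the template-offset output are assumptions, not Design2Cloth.
import torch
import torch.nn as nn

class MaskConditionedGarmentGenerator(nn.Module):
    def __init__(self, latent_dim: int = 128, num_vertices: int = 4096):
        super().__init__()
        self.num_vertices = num_vertices
        # Encode a (1, 128, 128) binary cloth mask into a compact shape code.
        self.mask_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, latent_dim),
        )
        # Decode [mask code | noise] into 3D offsets on a template garment mesh.
        self.decoder = nn.Sequential(
            nn.Linear(2 * latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_vertices * 3),
        )

    def forward(self, mask: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        code = torch.cat([self.mask_encoder(mask), z], dim=-1)
        return self.decoder(code).view(-1, self.num_vertices, 3)
```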

APISR: Anime Production Inspired Real-World Anime Super-Resolution

Mar 03, 2024

Boyang Wang, Fengyu Yang, Xihang Yu, Chao Zhang, Hanbin Zhao

While real-world anime super-resolution (SR) has gained increasing attention in the SR community, existing methods still adopt techniques from the photorealistic domain. In this paper, we analyze the anime production workflow and rethink how its characteristics can be exploited for real-world anime SR. First, we argue that video networks and datasets are not necessary for anime SR because of the repeated use of hand-drawn frames. Instead, we propose an anime image collection pipeline that selects the least compressed and most informative frames from video sources. Based on this pipeline, we introduce the Anime Production-oriented Image (API) dataset. In addition, we identify two anime-specific challenges: distorted and faint hand-drawn lines, and unwanted color artifacts. We address the first issue by introducing a prediction-oriented compression module in the image degradation model and a pseudo-ground-truth preparation step with enhanced hand-drawn lines. We also introduce a balanced twin perceptual loss that combines both anime and photorealistic high-level features to mitigate unwanted color artifacts and increase visual clarity. We evaluate our method through extensive experiments on the public benchmark, showing that it outperforms state-of-the-art approaches by a large margin.
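
To make the "balanced twin perceptual loss" idea concrete, here is a minimal hedged sketch that blends feature-space L1 distances from a photorealistic extractor (an ImageNet-pretrained VGG19, a common choice) and an anime-oriented extractor supplied by the caller. The layer cut-off, weights and the anime feature network itself are assumptions, not the authors' released implementation.

```python
# Hedged sketch of a "twin" perceptual loss mixing photorealistic and anime features.
# Layer choice, weights and the anime extractor are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class TwinPerceptualLoss(nn.Module):
    def __init__(self, anime_extractor: nn.Module, w_photo: float = 0.5, w_anime: float = 0.5):
        super().__init__()
        # Photorealistic branch: frozen ImageNet-pretrained VGG19 features.
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:36].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.photo = vgg
        # Anime branch: any feature network trained on anime imagery (placeholder).
        self.anime = anime_extractor.eval()
        self.w_photo, self.w_anime = w_photo, w_anime
        self.crit = nn.L1Loss()

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        # sr, hr: (B, 3, H, W) images, assumed already normalized for both extractors.
        loss_photo = self.crit(self.photo(sr), self.photo(hr))
        loss_anime = self.crit(self.anime(sr), self.anime(hr))
        return self.w_photo * loss_photo + self.w_anime * loss_anime
```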

UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons

Sep 13, 2023

Sicheng Yang, Zilin Wang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Qiaochu Huang, Lei Hao, Songcen Xu, Xiaofei Wu, Changpeng Yang (+1 more)

Automatic co-speech gesture generation draws much attention in computer animation. Previous works designed network structures for individual datasets, which results in limited data volume and poor generalizability across different motion capture standards. The task is also challenging because of the weak correlation between speech and gestures. To address these problems, we present UnifiedGesture, a novel diffusion-model-based, speech-driven gesture synthesis approach trained on multiple gesture datasets with different skeletons. Specifically, we first present a retargeting network that learns latent homeomorphic graphs for different motion capture standards, unifying the representations of various gestures while extending the dataset. We then capture the correlation between speech and gestures with a diffusion model architecture that uses cross-local attention and self-attention to generate better speech-matched and more realistic gestures. To further align speech and gestures and increase diversity, we incorporate reinforcement learning on discrete gesture units with a learned reward function. Extensive experiments show that UnifiedGesture outperforms recent approaches to speech-driven gesture generation in terms of CCA, FGD, and human-likeness. All code, pre-trained models, databases, and demos are publicly available at https://github.com/YoungSeng/UnifiedGesture.

* 16 pages, 11 figures, ACM MM 2023
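
A minimal sketch of the attention pattern the abstract describes: a denoiser block that applies self-attention over noisy gesture latents and cross-attention to speech features. The dimensions, pre-norm layout and the replacement of the paper's cross-local attention with plain cross-attention are simplifications and assumptions, not the authors' code.

```python
# Hedged sketch of a speech-conditioned denoiser block (assumed shapes and layout).
import torch
import torch.nn as nn

class GestureDenoiserBlock(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
        self.n1, self.n2, self.n3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, speech: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, dim) noisy gesture latents; speech: (batch, frames, dim) audio features.
        h = self.n1(x)
        x = x + self.self_attn(h, h, h)[0]          # self-attention over gesture frames
        x = x + self.cross_attn(self.n2(x), speech, speech)[0]  # attend to speech features
        return x + self.ff(self.n3(x))
```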

Deep Geometrized Cartoon Line Inbetweening

Sep 28, 2023

Li Siyao, Tianpei Gu, Weiye Xiao, Henghui Ding, Ziwei Liu, Chen Change Loy

We aim to address a significant but understudied problem in the anime industry, namely the inbetweening of cartoon line drawings. Inbetweening involves generating intermediate frames between two black-and-white line drawings and is a time-consuming and expensive process that can benefit from automation. However, existing frame interpolation methods that rely on matching and warping whole raster images are unsuitable for line inbetweening and often produce blurring artifacts that damage the intricate line structures. To preserve the precision and detail of the line drawings, we propose a new approach, AnimeInbet, which geometrizes raster line drawings into graphs of endpoints and reframes the inbetweening task as a graph fusion problem with vertex repositioning. Our method can effectively capture the sparsity and unique structure of line drawings while preserving the details during inbetweening. This is made possible via our novel modules, i.e., vertex geometric embedding, a vertex correspondence Transformer, an effective mechanism for vertex repositioning and a visibility predictor. To train our method, we introduce MixamoLine240, a new dataset of line drawings with ground truth vectorization and matching labels. Our experiments demonstrate that AnimeInbet synthesizes high-quality, clean, and complete intermediate line drawings, outperforming existing methods quantitatively and qualitatively, especially in cases with large motions. Data and code are available at https://github.com/lisiyao21/AnimeInbet.

* ICCV 2023
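
As a toy illustration of the geometrized representation above, the sketch below stores a line drawing as a graph of 2D endpoints and edges and performs naive linear vertex repositioning given a known correspondence. AnimeInbet learns the correspondence, repositioning and visibility, so the names and fields here are assumptions for illustration only.

```python
# Toy sketch: a line drawing as a graph of endpoint vertices, plus naive
# linear repositioning between two keyframes given a vertex correspondence.
from dataclasses import dataclass

import numpy as np

@dataclass
class LineGraph:
    vertices: np.ndarray          # (N, 2) endpoint coordinates
    edges: list[tuple[int, int]]  # index pairs connecting endpoints

def naive_inbetween(g0: LineGraph, g1: LineGraph, corr: dict[int, int], t: float = 0.5) -> LineGraph:
    """Move g0's vertices toward their matched vertices in g1 at time t in [0, 1]."""
    moved = g0.vertices.astype(float).copy()
    for i, j in corr.items():
        moved[i] = (1.0 - t) * g0.vertices[i] + t * g1.vertices[j]
    return LineGraph(vertices=moved, edges=list(g0.edges))
```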

Bridging the Gap: Fine-to-Coarse Sketch Interpolation Network for High-Quality Animation Sketch Inbetweening

Aug 25, 2023

Jiaming Shen, Kun Hu, Wei Bao, Chang Wen Chen, Zhiyong Wang

The 2D animation workflow typically begins with the creation of keyframes using sketch-based drawing. Subsequent inbetweens (i.e., intermediate sketch frames) are then crafted through manual interpolation to produce smooth animations, which is a labor-intensive process. The prospect of automatic animation sketch interpolation has therefore become highly appealing. However, existing video interpolation methods are generally hindered by two key issues for sketch inbetweening: 1) limited texture and colour details in sketches, and 2) exaggerated alterations between two sketch keyframes. To overcome these issues, we propose a novel deep learning method, the Fine-to-Coarse Sketch Interpolation Network (FC-SIN). This approach incorporates multi-level guidance that formulates region-level correspondence, sketch-level correspondence and pixel-level dynamics. A multi-stream U-Transformer is then devised to characterize sketch inbetweening patterns using these multi-level guides through the integration of both self-attention and cross-attention mechanisms. Additionally, to facilitate future research on animation sketch inbetweening, we constructed a large-scale dataset, STD-12K, comprising 30 sketch animation series in diverse artistic styles. Comprehensive experiments on this dataset show that the proposed FC-SIN surpasses state-of-the-art interpolation methods. Our code and dataset will be made publicly available.

* 7 pages, 6 figures

AnimeDiffusion: Anime Face Line Drawing Colorization via Diffusion Models

Mar 20, 2023

Yu Cao, Xiangqiao Meng, P. Y. Mok, Xueting Liu, Tong-Yee Lee, Ping Li

Manually colorizing anime line drawings is time-consuming and tedious work, yet it is an essential stage in the cartoon animation creation pipeline. Reference-based line drawing colorization is a challenging task that relies on precise cross-domain long-range dependency modelling between the line drawing and the reference image. Existing learning methods still use generative adversarial networks (GANs) as a key module of their architectures. In this paper, we propose AnimeDiffusion, a novel method based on diffusion models that colorizes anime face line drawings automatically. To the best of our knowledge, this is the first diffusion model tailored for anime content creation. To address the heavy training cost of diffusion models, we design a hybrid training strategy: we first pre-train a diffusion model with classifier-free guidance and then fine-tune it with image reconstruction guidance. We find that with only a few iterations of fine-tuning, the model achieves excellent colorization performance, as illustrated in Fig. 1. To train AnimeDiffusion, we construct an anime face line drawing colorization benchmark dataset containing 31,696 training images and 579 testing images. We hope this dataset fills the gap left by the lack of a high-resolution anime face dataset for evaluating colorization methods. Through multiple quantitative metrics evaluated on our dataset and a user study, we demonstrate that AnimeDiffusion outperforms state-of-the-art GAN-based models for anime face line drawing colorization. We have also collaborated with professional artists to test and apply AnimeDiffusion in their creative work. We release our code at https://github.com/xq-meng/AnimeDiffusion.
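
The pre-training stage mentioned above relies on classifier-free guidance, a standard diffusion technique. The sketch below shows the usual guided noise prediction; the denoiser interface and guidance scale are assumed rather than taken from the released AnimeDiffusion code.

```python
# Generic classifier-free guidance step; the denoiser signature is an assumption.
import torch

@torch.no_grad()
def cfg_noise_prediction(denoiser, x_t: torch.Tensor, t: torch.Tensor,
                         ref: torch.Tensor, guidance_scale: float = 3.0) -> torch.Tensor:
    """Combine conditional and unconditional noise predictions.

    denoiser(x_t, t, cond) -> predicted noise; cond=None means the reference
    (colour) image is dropped, mirroring how CFG models are trained.
    """
    eps_uncond = denoiser(x_t, t, cond=None)
    eps_cond = denoiser(x_t, t, cond=ref)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```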

Elliptically-Contoured Tensor-variate Distributions with Application to Improved Image Learning

Nov 13, 2022

Carlos Llosa-Vite, Ranjan Maitra

Statistical analysis of tensor-valued data has largely relied on the tensor-variate normal (TVN) distribution, which may be inadequate when the data come from distributions with heavier or lighter tails. We study a general family of elliptically contoured (EC) tensor-variate distributions and derive its characterizations, moments, marginal and conditional distributions, and the EC Wishart distribution. We describe procedures for maximum likelihood estimation from data that are (1) uncorrelated draws from an EC distribution, (2) from a scale mixture of the TVN distribution, and (3) from an underlying but unknown EC distribution, where we extend Tyler's robust estimator. A detailed simulation study highlights the benefits of choosing an EC distribution over the TVN for heavier-tailed data. We develop tensor-variate classification rules using discriminant analysis with EC errors and show that they better predict cats and dogs from images in the Animal Faces-HQ dataset than the TVN-based rules. A novel tensor-on-tensor regression and tensor-variate analysis of variance (TANOVA) framework under EC errors is also shown to better characterize gender, age and ethnic origin than the usual TVN-based TANOVA on the celebrated Labeled Faces in the Wild dataset.

* 21 pages, 5 figures, 1 table
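
For readers unfamiliar with the scale-mixture construction mentioned in the abstract, the following is a standard form of the resulting density, written in conventional tensor-variate normal notation (not necessarily the paper's), where a K-way tensor has mode sizes p_1, ..., p_K, p = prod_k p_k, and H is the mixing distribution on the scale tau:

```latex
% Standard scale mixture of the tensor-variate normal:
%   X = M + sqrt(tau) * Z,   Z ~ TVN(0; Sigma_1, ..., Sigma_K),   tau ~ H.
\[
  f(\mathcal{X}) \;=\; \int_{0}^{\infty}
    (2\pi\tau)^{-p/2}
    \prod_{k=1}^{K}\lvert\boldsymbol{\Sigma}_k\rvert^{-p/(2p_k)}
    \exp\!\Big(-\tfrac{1}{2\tau}\,
      \operatorname{vec}(\mathcal{X}-\mathcal{M})^{\top}
      \big(\boldsymbol{\Sigma}_K\otimes\cdots\otimes\boldsymbol{\Sigma}_1\big)^{-1}
      \operatorname{vec}(\mathcal{X}-\mathcal{M})\Big)\,\mathrm{d}H(\tau),
\]
% so heavier- or lighter-tailed members of the EC family arise from the choice of H.
```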

Collaborative Neural Rendering using Anime Character Sheets

Jul 12, 2022

Zuzeng Lin, Ailin Huang, Zhewei Huang, Chen Hu, Shuchang Zhou

Drawing images of characters at desired poses is an essential but laborious task in anime production. In this paper, we present the Collaborative Neural Rendering (CoNR) method, which creates new images from a few arbitrarily posed reference images available in character sheets. In general, the high diversity of body shapes of anime characters defies the use of universal body models designed for real-world humans, such as SMPL. To overcome this difficulty, CoNR uses a compact and easy-to-obtain landmark encoding to avoid creating a unified UV mapping in the pipeline. In addition, CoNR's performance can be significantly improved when multiple reference images are available, by using feature-space cross-view dense correspondence and warping in a specially designed neural network. Moreover, we collect a character sheet dataset containing over 700,000 hand-drawn and synthesized images of diverse poses to facilitate research in this area.

* The first three authors contribute equally. Preprint, under revision

Improving the Perceptual Quality of 2D Animation Interpolation

Nov 24, 2021

Shuhong Chen, Matthias Zwicker

Traditional 2D animation is labor-intensive, often requiring animators to manually draw twelve illustrations per second of movement. While automatic frame interpolation may ease this burden, the artistic effects inherent to 2D animation make video synthesis particularly challenging compared to the photorealistic domain. Lower framerates result in larger displacements and occlusions, discrete perceptual elements (e.g. lines and solid-color regions) pose difficulties for texture-oriented convolutional networks, and exaggerated nonlinear movements hinder training data collection. Previous work has tried to address these issues, but used unscalable methods and focused on pixel-perfect performance. In contrast, we build a scalable system centered more appropriately on perceptual quality for this artistic domain. First, we propose a lightweight architecture with a simple yet effective occlusion-inpainting technique that improves convergence on perceptual metrics with fewer trainable parameters. Second, we design a novel auxiliary module that leverages the Euclidean distance transform to improve the preservation of key line and region structures. Third, we automatically double the existing manually collected dataset for this task by quantitatively filtering out movement nonlinearities, allowing us to improve model generalization. Finally, we establish LPIPS and chamfer distance as strongly preferable to PSNR and SSIM through a user study, validating our system's emphasis on perceptual quality in the 2D animation domain.

* under review
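
A hedged sketch of how a Euclidean distance transform can supervise line structure, in the spirit of the auxiliary module described above: binary line masks are turned into distance maps and compared with an L1 penalty. The threshold, loss form and weighting are assumptions, not the paper's module.

```python
# Sketch of a distance-transform auxiliary target for line drawings (assumed details).
import numpy as np
from scipy.ndimage import distance_transform_edt

def line_distance_map(drawing: np.ndarray, line_threshold: float = 0.5) -> np.ndarray:
    """drawing: (H, W) grayscale in [0, 1], dark pixels are lines."""
    lines = drawing < line_threshold          # boolean line mask
    # Distance of every pixel to the nearest line pixel (line pixels are the zeros).
    return distance_transform_edt(~lines)

def distance_transform_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """L1 penalty between the distance maps of a predicted and a target frame."""
    return float(np.abs(line_distance_map(pred) - line_distance_map(target)).mean())
```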

Neural-GIF: Neural Generalized Implicit Functions for Animating People in Clothing

Aug 20, 2021

Garvita Tiwari, Nikolaos Sarafianos, Tony Tung, Gerard Pons-Moll

We present Neural Generalized Implicit Functions (Neural-GIF) to animate people in clothing as a function of body pose. Given a sequence of scans of a subject in various poses, we learn to animate the character for new poses. Existing methods have relied on template-based representations of the human body (or clothing). However, such models usually have fixed and limited resolutions, require difficult data pre-processing steps, and cannot be used with complex clothing. We draw inspiration from template-based methods, which factorize motion into articulation and non-rigid deformation, but generalize this concept to implicit shape learning to obtain a more flexible model. We learn to map every point in space to a canonical space, where a learned deformation field is applied to model non-rigid effects, before evaluating the signed distance field. Our formulation allows the learning of complex and non-rigid deformations of clothing and soft tissue without computing a template registration, as is common with current approaches. Neural-GIF can be trained on raw 3D scans and reconstructs detailed, complex surface geometry and deformations. Moreover, the model can generalize to new poses. We evaluate our method on a variety of characters from different public datasets in diverse clothing styles and show significant improvements over baseline methods, quantitatively and qualitatively. We also extend our model to a multiple-shape setting. To stimulate further research, we will make the model, code and data publicly available at: https://virtualhumans.mpi-inf.mpg.de/neuralgif/
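
Structurally, the query pipeline in the abstract composes three learned maps: a pose-dependent mapping to canonical space, a non-rigid deformation field, and a canonical signed distance field. A minimal sketch of that composition follows, with the module interfaces and shapes assumed for illustration rather than taken from the authors' code.

```python
# Structural sketch (assumed interfaces) of the canonical-map / deform / SDF composition.
import torch
import torch.nn as nn

class NeuralGIFSketch(nn.Module):
    def __init__(self, canonical_map: nn.Module, deform_field: nn.Module, sdf_net: nn.Module):
        super().__init__()
        self.canonical_map = canonical_map  # (posed points, pose) -> canonical points
        self.deform_field = deform_field    # (canonical points, pose) -> non-rigid offsets
        self.sdf_net = sdf_net              # canonical points -> signed distance

    def forward(self, x: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        # x: (N, 3) query points in posed space; pose: pose parameters.
        x_can = self.canonical_map(x, pose)
        x_can = x_can + self.deform_field(x_can, pose)  # model clothing/soft-tissue effects
        return self.sdf_net(x_can)
```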
