About the project

The DIGIDOC project is set in the general context of document digitization, and more specifically of rare and ancient documents. At a time when large projects for promoting the written heritage are multiplying, this project focuses on the image acquisition step in order to improve and simplify the later use of the digitized documents (archiving, text recognition, document extraction, etc.). Taking the intended use of digitized documents into account necessarily translates into domain knowledge and constraints. Our objective is therefore to condition the image production phase on both a priori knowledge about the characteristics of the documents to be digitized and knowledge about the use that will be made of them. To this end, we study the feasibility of integrating into scanners an additional module that provides, alongside the digitized image, a set of intermediate-level descriptors computed on that image. These descriptors, dedicated to the acquisition, storage, analysis and indexing of digitized documents, should make it possible to quantify how well the digitization of a document matches its intended use. The definition of such a set of descriptors and its integration into a new digitized document format is the central objective of this project. This new format will enable new modes of interaction with scanners as well as new document analysis tools. A first application will aim to simplify scanner configuration (digitization and pre-processing) by semi-automatically determining the best parameters, which may then vary from one image to another, or even from one part of an image to another. A second application will be to assess the quality of document image sets produced during past digitization campaigns. The objectives of this project fit squarely within the theme of the "Content and Interaction" call, since they aim to design a new format for describing the content of digitized documents in order to simplify and improve their archiving, processing, comparison and indexing. The project brings together research laboratories (LaBRI Bordeaux, LI Tours, L3I La Rochelle, LITIS Rouen), industrial partners (I2S Bordeaux, Arkhenum Bordeaux) and end users (BnF).

DocOnCloud

As part of the DIGIDOC ANR project, we developed a whole set of software tools. Visit docOnCloud to try them out (synthetic image generation, classification, OCR, image processing, ...).

Defended PhD theses

Below are the resources related to the PhD theses defended as part of the DIGIDOC project.
  • PhD thesis of Kieu Van Cuong
    • Title: Modèle de dégradation d’images de documents anciens pour la génération de données semi-synthétiques
    • Defended: 25 November 2014
    • Jury: Nicole Vincent, Josep Lladós (reviewers), Bertrand Coüasnon (examiner), Jean-Philippe Moreux (invited member), Rémy Mullot, Jean-Philippe Domenger, Muriel Visani, Nicholas Journet (supervisors)
    • Download the manuscript
    • Download the slides

Publications

Abstract

In this paper, we describe a novel and simple technique for predicting OCR results without using any OCR. The technique uses a bag of allographs to characterize textual components. A support vector regression (SVR) technique is then used to build a predictor based on the bag of allographs. The performance of the system is evaluated on a corpus of historical documents. The proposed technique predicts OCR results on training and test documents within a standard deviation of 4.18% and 6.54%, respectively. The proposed system has been designed as a tool to assist the selection of corpora in libraries and to specify the typical performance that can be expected on the selection.
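
As a rough illustration of this pipeline, the sketch below builds a bag-of-allographs histogram from the connected components of a binarized page and feeds it to a support vector regressor. It is not the authors' implementation: the component filtering, the 16x16 normalization, the codebook size and all function names are illustrative assumptions.

    import numpy as np
    from skimage.measure import label, regionprops
    from skimage.transform import resize
    from sklearn.cluster import KMeans
    from sklearn.svm import SVR

    def component_shapes(binary_page, size=(16, 16)):
        """Extract connected components and normalise each one to a fixed size."""
        shapes = []
        for region in regionprops(label(binary_page)):
            if region.area < 10:  # skip specks
                continue
            patch = resize(region.image.astype(float), size, anti_aliasing=True)
            shapes.append(patch.ravel())
        return np.array(shapes)

    def bag_of_allographs(pages, codebook):
        """One normalised histogram of allograph prototypes per page."""
        hists = []
        for page in pages:
            words = codebook.predict(component_shapes(page))
            h = np.bincount(words, minlength=codebook.n_clusters).astype(float)
            hists.append(h / max(h.sum(), 1.0))
        return np.array(hists)

    def train_ocr_predictor(train_pages, ocr_accuracies, n_allographs=64):
        """train_pages: binarized page images; ocr_accuracies: known OCR scores."""
        all_shapes = np.vstack([component_shapes(p) for p in train_pages])
        codebook = KMeans(n_clusters=n_allographs, n_init=10).fit(all_shapes)
        svr = SVR(kernel="rbf", C=10.0).fit(
            bag_of_allographs(train_pages, codebook), ocr_accuracies)
        return codebook, svr

    # Usage: codebook, svr = train_ocr_predictor(train_pages, accuracies)
    #        predicted = svr.predict(bag_of_allographs(test_pages, codebook))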


Abstract

In this article, we study how semi-synthetic data can be used to finely evaluate the performance of algorithms or to provide training data for a document image processing or analysis system. The semi-synthetic images we generate faithfully reproduce the defects of ancient documents caused by old printing processes or by the degradation of the ink of the characters. The first experiment carried out in this article compares the performance of different texture descriptors for image segmentation. The second experiment shows that using semi-synthetic images makes it possible to quantitatively and qualitatively enrich a training set used by a method that predicts binarization results on document images, improving the results by 15%.


Abstract

In this article, a complete framework for the comparative analysis of texture features is presented and evaluated for the segmentation and characterization of ancient book pages. Firstly, the content of an entire book is characterized by extracting the texture attributes of each page. The extraction of the texture features is based on a multiresolution analysis. Secondly, a clustering approach is performed in order to automatically classify the homogeneous regions of book pages. Namely, two approaches based on two different statistical categories of texture features, autocorrelation and co-occurrence, are compared in order to segment the content of ancient book pages and find homogeneous regions with little a priori knowledge. By computing several clustering and classification accuracy measures, the results of the comparison show the effectiveness of the proposed framework. Tests on different book contents (text vs. graphics, manuscript vs. printed) show that these texture features are more suitable for distinguishing textual regions from graphical ones than for distinguishing text fonts.
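
For readers unfamiliar with co-occurrence features, the sketch below illustrates one of the two compared families (GLCM) on non-overlapping windows of a grayscale page, followed by a simple clustering step. The window size, GLCM settings and number of clusters are illustrative assumptions, not the configuration used in the paper.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.cluster import KMeans

    def glcm_features(window, distances=(1, 2), angles=(0, np.pi / 2)):
        """Co-occurrence descriptors of one grayscale (uint8) window."""
        glcm = graycomatrix(window, distances=distances, angles=angles,
                            levels=256, symmetric=True, normed=True)
        props = ("contrast", "homogeneity", "energy", "correlation")
        return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

    def cluster_page(gray_page, win=64, n_clusters=3):
        """Label each non-overlapping window of a page with a texture cluster."""
        h, w = gray_page.shape
        feats, coords = [], []
        for y in range(0, h - win + 1, win):
            for x in range(0, w - win + 1, win):
                feats.append(glcm_features(gray_page[y:y + win, x:x + win]))
                coords.append((y, x))
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(np.array(feats))
        return list(zip(coords, labels))  # e.g. text / graphics / background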


Abstract

For the segmentation of ancient digitized document images, it has been shown that texture feature analysis is a consistent choice for meeting the need to segment a page layout under significant and varied degradations. In addition, it has been shown that texture-based approaches work effectively without any hypothesis on the document structure, the document model, or the typographical parameters. Thus, by investigating the use of texture as a tool for automatically segmenting images, we propose to search for homogeneous and similar content regions by analyzing texture features based on a multiresolution analysis. The preliminary results show the effectiveness of the texture features extracted from the autocorrelation function, the Grey Level Co-occurrence Matrix (GLCM), and the Gabor filters. In order to assess the robustness of the proposed texture-based approaches, images under numerous degradation models are generated and two image enhancement algorithms (non-local means filtering and superpixel techniques) are evaluated with several accuracy metrics. This study shows the robustness of texture feature extraction for segmentation in the presence of noise and the uselessness of a denoising step.

Abstract

This paper presents an original feature vector extraction process based on the Delaunay triangulation (DT) and a zoning technique. The presented work provides an illustration of the equivalence between zoning and the Delaunay triangulation in the context of handwritten character recognition. A novel technique that relies on the approximation of a DT and an automatic pruning calculation is introduced. We call this technique the α-approximation. To discuss our contribution, experiments are conducted on the MNIST database of handwritten digits using a support vector machine classifier for the classification task.
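
The sketch below gives a simplified flavour of a Delaunay-based descriptor for a segmented binary digit; the triangle-area histogram and the edge-length pruning used here only approximate the spirit of the α-approximation and are not the paper's actual formulation.

    import numpy as np
    from scipy.spatial import Delaunay

    def delaunay_descriptor(binary_digit, n_bins=16, max_edge=None):
        """Histogram of Delaunay triangle areas over the foreground pixels."""
        pts = np.column_stack(np.nonzero(binary_digit)).astype(float)
        tri = Delaunay(pts)
        areas, edges = [], []
        for simplex in tri.simplices:
            a, b, c = pts[simplex]
            areas.append(0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                                   - (b[1] - a[1]) * (c[0] - a[0])))
            edges.append(max(np.linalg.norm(b - a),
                             np.linalg.norm(c - b),
                             np.linalg.norm(a - c)))
        areas, edges = np.array(areas), np.array(edges)
        if max_edge is not None:  # crude pruning of elongated triangles
            areas = areas[edges <= max_edge]
        hist, _ = np.histogram(areas, bins=n_bins, range=(0, binary_digit.size / 4))
        return hist / max(hist.sum(), 1)

    # An SVM (e.g. sklearn.svm.SVC) can then be trained on these descriptors,
    # as the paper does on the MNIST digits.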


Abstract

Handwriting recognition is an open research topic in the document analysis community. We provide two new, freely available real-world datasets for an established problem. The competition consists of two independent tasks, namely segmented single Arabic digits and Arabic digit strings. Contributions will be accepted for either of the competitions. The dataset of segmented digits is a subset of the larger dataset of digit strings. It has been collected mostly among students of the Vienna University of Technology and covers about 300 writers, female and male alike. To our knowledge, this database is the first one to provide files as RGB. They are delivered in original size with a resolution of 300 dpi. Contrary to other datasets, the digits are not size-normalized but provided in their original size, since in real-world cases the writers' styles vary in size as well as in writing style. We invite all researchers in the field of digit and digit string recognition to participate in the contest, which is organized in conjunction with ICDAR 2013. The evaluation and a short abstract of the submitted methods will be presented at ICDAR 2013 and published in the conference proceedings. All rights to the submitted software remain with the authors. Due to the low number of participants in the Handwritten Digit String Competition, only the competition on single handwritten digits has been carried out in conjunction with ICDAR 2013. We plan to organize the Handwritten Digit String Competition in conjunction with upcoming conferences.


Abstract

In this paper, we investigate a specific area of document classification in which the documents come as a flow over time. Moreover, the exact number of document classes to deal with is not known from the beginning and may evolve over time. To perform the classification task in such a setting, we need specific classifiers that are able to learn incrementally and change their model over time. More specifically, we focus our study on SVM approaches, known to perform well, and for which incremental (i-SVM) procedures exist. Nevertheless, most of them are only able to deal with a fixed number of classes. We therefore designed a new incremental learning procedure based on one-class SVMs. It is able to improve its classification accuracy over time, with the arrival of new labeled data, without performing any complete retraining. Moreover, when instances arrive with a previously unknown label (appearance of a new class), the training procedure is able to modify the classifier model to recognize this new kind of document. To investigate this setting, while waiting to collect document images as a flow, we carried out first experiments on the Optical Recognition of Handwritten Digits Data Set. These experiments show that our incremental approach is able to perform, at each point in time, as well as a static one-class classifier fully retrained on all previously seen data, and to model new incoming classes very quickly and efficiently.
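
A heavily simplified sketch of the one-class-SVM-per-class idea is given below. Unlike the incremental procedure of the paper, this version simply refits the single affected class model when new labelled samples arrive; the class handling and parameters are illustrative assumptions.

    import numpy as np
    from sklearn.svm import OneClassSVM

    class FlowClassifier:
        """One one-class SVM per document class seen so far."""

        def __init__(self, nu=0.1, gamma="scale"):
            self.params = dict(kernel="rbf", nu=nu, gamma=gamma)
            self.samples = {}  # class label -> list of feature vectors
            self.models = {}   # class label -> fitted OneClassSVM

        def add_labelled(self, x, label):
            """New labelled sample: update (or create) only that class's model."""
            self.samples.setdefault(label, []).append(np.asarray(x, dtype=float))
            self.models[label] = OneClassSVM(**self.params).fit(
                np.vstack(self.samples[label]))

        def predict(self, x):
            """Assign x to the class whose one-class model scores it highest."""
            x = np.asarray(x, dtype=float).reshape(1, -1)
            scores = {lab: m.decision_function(x)[0]
                      for lab, m in self.models.items()}
            return max(scores, key=scores.get)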


Abstract

The French National Library (BnF) has launched many mass digitization projects in order to give access to its collection. The indexation of digital documents on Gallica (the digital library of the BnF) relies on their textual content, obtained through service providers that use Optical Character Recognition (OCR) software. OCR engines have become increasingly complex systems composed of several subsystems dedicated to the analysis and the recognition of the elements in a page. However, the reliability of these systems remains an issue. Indeed, in some cases errors appear in OCR outputs because of an accumulation of several errors at different levels of the OCR process. One of the frequent errors in OCR outputs is missed text components. The presence of such errors may lead to severe defects in digital libraries. In this paper, we investigate the detection of missed text components to control the OCR results on the collections of the French National Library. Our verification approach uses local information inside the pages, based on Radon transform descriptors and Local Binary Pattern (LBP) descriptors coupled with the OCR results, to check their consistency. The experimental results show that our method detects 84.15% of the missed textual components by comparing the OCR ALTO output files (produced by the service providers) to the images of the documents.
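
The sketch below shows one plausible way to combine a Radon-transform signature with an LBP histogram for a page region; it is only meant to illustrate the kind of local descriptors mentioned above, not the verification system itself (ALTO parsing and the decision rule are omitted, and all parameters are assumptions).

    import numpy as np
    from skimage.transform import radon
    from skimage.feature import local_binary_pattern

    def region_descriptor(gray_region, angles=np.arange(0., 180., 15.)):
        """Radon signature + uniform LBP histogram for one page region."""
        # Radon part: variance of the projection at each angle; text lines give
        # strong peaks around the dominant text orientation.
        sinogram = radon(gray_region.astype(float), theta=angles, circle=False)
        radon_sig = sinogram.var(axis=0)
        radon_sig = radon_sig / max(radon_sig.sum(), 1e-9)

        # LBP part: distribution of uniform local binary patterns (stroke texture).
        lbp = local_binary_pattern(gray_region, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        return np.concatenate([radon_sig, lbp_hist])

    # Regions whose descriptor looks "textual" but that contain no word in the
    # ALTO output could then be flagged as potentially missed text components.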


Abstract

Recent progress in the digitization of heterogeneous collections of ancient documents has raised new challenges in information retrieval in digital libraries and document layout analysis. Therefore, in order to control the quality of historical document image digitization and to meet the need for a characterization of their content using intermediate-level metadata (between image and document structure), we propose a fast automatic layout segmentation of old document images based on five descriptors. Those descriptors, based on the autocorrelation function, are obtained by multiresolution analysis and then used in a specific clustering method. The method proposed in this article has the advantage of being performed without any hypothesis on the document structure, either about the document model (physical structure) or the typographical parameters (logical structure). It is also parameter-free, since it automatically adapts to the image content. In this paper, we first detail our proposal to characterize the content of old documents by extracting autocorrelation features in the different areas of a page and at several resolutions. Then, we show that it is possible to automatically find the homogeneous regions defined by similar autocorrelation indices, without knowledge of the number of clusters, using adapted hierarchical agglomerative classification and consensus clustering approaches. To assess our method, we apply our algorithm on 316 old document images spanning six centuries (1200-1900) of French history, in order to demonstrate the performance of our proposal in terms of segmentation and characterization of heterogeneous corpus content. Moreover, we define a new evaluation metric, the homogeneity measure, which aims at evaluating the segmentation and characterization accuracy of our methodology. We obtain a mean homogeneity accuracy of 85%. These results help to represent a document by a hierarchy of layout structure and content, and to define one or more signatures for each page, on the basis of a hierarchical representation of homogeneous blocks and their topology.
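
To make the descriptor idea concrete, the sketch below computes FFT-based autocorrelation statistics at several resolutions for fixed-size page blocks and groups them with agglomerative clustering. The five descriptors of the paper are replaced here by simple directional statistics, and the consensus-clustering step is omitted; everything in this sketch is an illustrative assumption.

    import numpy as np
    from skimage.transform import pyramid_gaussian
    from sklearn.cluster import AgglomerativeClustering

    def autocorrelation(window):
        """2-D autocorrelation via the FFT (Wiener-Khinchin), peak-normalised."""
        w = window - window.mean()
        spectrum = np.abs(np.fft.fft2(w)) ** 2
        ac = np.fft.fftshift(np.real(np.fft.ifft2(spectrum)))
        return ac / max(ac.max(), 1e-9)

    def multires_descriptor(window, levels=3):
        """Directional autocorrelation statistics at several resolutions."""
        feats = []
        for scaled in pyramid_gaussian(window, max_layer=levels - 1, downscale=2):
            ac = autocorrelation(scaled)
            h_cut = ac[ac.shape[0] // 2, :]  # horizontal central cut
            v_cut = ac[:, ac.shape[1] // 2]  # vertical central cut
            feats += [h_cut.mean(), h_cut.std(), v_cut.mean(), v_cut.std()]
        return np.array(feats)

    def segment_blocks(blocks, n_clusters=2):
        """Cluster fixed-size 8-bit grayscale page blocks into homogeneous regions."""
        X = np.array([multires_descriptor(b.astype(float) / 255.0) for b in blocks])
        return AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X)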


Abstract

Texture feature analysis has undergone tremendous growth in recent years. It plays an important role in the analysis of many kinds of images. More recently, the use of texture analysis techniques for historical document image segmentation has become a logical and relevant choice given the significant degradation of document images and the lack of information on the document structure, such as the document model and the typographical parameters. However, previous work on the use of texture analysis for the segmentation of digitized historical document images has been limited to separately testing one of the well-known texture-based approaches, such as the autocorrelation function, the Grey Level Co-occurrence Matrix (GLCM), Gabor filters, gradients, wavelets, etc. In this paper, we raise the question of which texture-based method is better suited for discriminating graphical regions from textual ones on the one hand, and for separating textual regions with different sizes and fonts on the other. The objective of this paper is to compare some of the well-known texture-based approaches, namely the autocorrelation function, the GLCM, and Gabor filters, for the segmentation of digitized historical document images. The texture features are briefly described and quantitative results are obtained on simplified historical document images. The achieved results are very encouraging.
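
As an example of the third compared family, the sketch below extracts a standard Gabor filter bank response (mean and standard deviation of the magnitude per frequency and orientation) for a grayscale window; the frequencies and orientations are illustrative values, not the paper's configuration.

    import numpy as np
    from skimage.filters import gabor

    def gabor_features(gray_window, frequencies=(0.1, 0.2, 0.4),
                       thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
        """Mean and std of the Gabor magnitude response per (frequency, orientation)."""
        img = gray_window.astype(float)
        feats = []
        for f in frequencies:
            for t in thetas:
                real, imag = gabor(img, frequency=f, theta=t)
                magnitude = np.hypot(real, imag)
                feats += [magnitude.mean(), magnitude.std()]
        return np.array(feats)  # 3 frequencies x 4 orientations x 2 stats = 24 values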


Abstract

In the context of historical collection conservation and worldwide diffusion, this paper presents an automatic approach of historical book page layout segmentation. In this article, we propose to search the homogeneous regions from the content of historical digitized books with little a priori knowledge by extracting and analyzing texture features. The novelty of this work lies in the unsupervised clustering of the extracted texture descriptors to find homogeneous regions, i.e. graphic and textual regions, by performing the clustering approach on an entire book instead of processing each page individually. We propose firstly to characterize the content of an entire book by extracting the texture information of each page, as our goal is to compare and index the content of digitized books. The extraction of texture features, computed without any hypothesis on the document structure, is based on two non-parametric tools: the autocorrelation function and multiresolution analysis. Secondly, we perform an unsupervised clustering approach on the extracted features in order to classify automatically the homogeneous regions of book pages. The clustering results are assessed by internal and external accuracy measures. The overall results are quite satisfying. Such analysis would help to construct a computer-aided categorization tool of pages.


Abstract

The first competition on music scores that was organized at ICDAR in 2011 awoke the interest of researchers, who participated in both the staff removal and writer identification tasks. In this second edition, we focus on the staff removal task and simulate a real-case scenario: old music scores. For this purpose, we have generated a new set of images using two kinds of degradations: local noise and 3D distortions. This paper describes the dataset, the distortion methods, the evaluation metrics, the participants' methods and the obtained results.


Abstract

This article presents a method for generating semi-synthetic images of old documents where the pages might be torn (not flat). By using only 2D deformation models, most existing methods produce unrealistic synthetic document images. We therefore propose a 3D approach for reproducing the geometric distortions found in real documents. First, our new texture coordinate generation technique extracts the texture coordinates of each vertex in the document shape (mesh) resulting from the 3D scanning of a real degraded document. Then, any 2D document image can be overlaid on the mesh by using an existing texture image mapping method. As a result, many complex real geometric distortions can be integrated in the generated synthetic images. These images can then be used for enriching training sets or for performance evaluation. Our degradation method is used here jointly with the character degradation model we proposed previously to generate the 6,000 semi-synthetic degraded images of the ICDAR 2013 music score staff removal competition.
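
The overlay principle can be illustrated with a very small sketch: each vertex of the scanned 3D mesh receives a (u, v) texture coordinate, here obtained by a simple orthographic projection and normalization. The paper's texture coordinate generation technique is more elaborate; this is only an assumption-laden toy version.

    import numpy as np

    def planar_texture_coords(vertices):
        """vertices: (N, 3) mesh points; returns (N, 2) texture coords in [0, 1]."""
        xy = vertices[:, :2]  # drop depth: orthographic projection onto the page plane
        mins, maxs = xy.min(axis=0), xy.max(axis=0)
        return (xy - mins) / np.maximum(maxs - mins, 1e-9)

    # A renderer (e.g. an OpenGL pipeline or a ray caster) can then sample any
    # 2-D document image at these (u, v) coordinates to wrap it onto the 3-D mesh.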


Abstract

This paper presents an efficient parametrization method for generating synthetic noise on document images. By specifying the desired categories and amount of noise, the method is able to generate synthetic document images with most of the degradations observed in real document images (ink splotches, white specks or streaks). Thanks to the ability to simulate different amounts and kinds of noise, it is possible to evaluate the robustness of many document image analysis methods. It also makes it possible to generate data for algorithms that rely on a learning process. The degradation model presented in [7] needs eight parameters for randomly generating noise regions. We propose here an extension of this model that automatically sets the eight parameters to generate precisely what a user wants (amount and category of noise). Our proposition consists of three steps. First, Nsp seed-points (i.e. centres of noise regions) are selected by an adaptive procedure. Then, these seed-points are classified into three categories of noise using a heuristic rule. Finally, the size of each noise region is set using a random process in order to generate degradations that are as realistic as possible.


Abstract

Historical documents pose challenging problems for training handwriting recognition systems. Besides the high variability of character shapes inherent to all handwriting, the image quality can also differ greatly, for instance due to faded ink, ink bleed-through, wrinkled and stained parchment. Especially when only few learning samples are available, it is difficult to incorporate this variability in the morphological character models. In this paper, we investigate the use of image degradation to generate synthetic learning samples for historical handwriting recognition. With respect to three image degradation models, we report significant improvements in accuracy for recognition with hidden Markov models on the medieval Saint Gall and Parzival data sets.


Abstract

This article proposes an approach to predict the result of binarization algorithms on a given document image according to its state of degradation. Indeed, historical documents suffer from different types of degradation which result in binarization errors. We intend to characterize the degradation of a document image by using different features based on the intensity, quantity and location of the degradation. These features allow us to build prediction models of binarization algorithms that are very accurate according to R2 values and p-values. The prediction models are used to select the best binarization algorithm for a given document image. Obviously, this image-by-image strategy improves the binarization of the entire dataset.
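
A minimal sketch of the selection strategy is shown below: one regression model per binarization algorithm predicts a quality score from degradation features of the page, and the algorithm with the best predicted score is kept. Feature extraction is abstracted away and plain linear regression stands in for the paper's models; all names are illustrative.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def train_predictors(features, scores_per_algo):
        """features: (n_pages, n_feats); scores_per_algo: {algo_name: (n_pages,) scores}."""
        return {algo: LinearRegression().fit(features, scores)
                for algo, scores in scores_per_algo.items()}

    def select_algorithm(predictors, page_features):
        """Pick the binarization algorithm with the best predicted score for one page."""
        x = np.asarray(page_features, dtype=float).reshape(1, -1)
        predictions = {algo: model.predict(x)[0] for algo, model in predictors.items()}
        return max(predictions, key=predictions.get)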

Abstract

This article focuses on a new method for creating ground truth for the perceptual quality of documents. Such ground truths are useful for the performance evaluation of algorithms measuring the impact of specific document defects, such as bleed-through, spots and character degradations, on the quality perceived by humans [6]. To our knowledge, no methodology exists to create this kind of database specifically for documents. All known methods produce empirical and subjective results. Moreover, the creation of these ground truths takes a very long time. In this article, we present a new methodology to create this kind of ground truth. This methodology has two main advantages: compared to traditional methods, it minimizes both the time spent creating the ground truth and the subjectivity involved. Time and subjectivity are lowered by using a binary-search insertion sort (at most log2(N) comparisons). A user only has to select, between two images, the one that is the most degraded (according to a quality criterion). Moreover, the tool presented in this article is implemented using web services, allowing ground truths to be created collaboratively.
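
The ranking mechanism can be sketched in a few lines: each new image is inserted into an already sorted list using a binary search, so that a user answers at most about log2(N) "which is more degraded?" questions per image. The callback name and the interface are assumptions; the web-service layer of the actual tool is not shown.

    def rank_by_degradation(images, is_more_degraded):
        """is_more_degraded(a, b) -> True if image a looks more degraded than image b."""
        ranked = []
        for img in images:
            lo, hi = 0, len(ranked)
            while lo < hi:  # binary search: at most ~log2(len(ranked)) questions
                mid = (lo + hi) // 2
                if is_more_degraded(img, ranked[mid]):
                    lo = mid + 1
                else:
                    hi = mid
            ranked.insert(lo, img)
        return ranked  # ordered from least to most degraded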

Abstract

The Kanungo noise model is widely used to test the robustness of binary document image analysis methods to noise. This model only works with binary images, while most document images are in grayscale. Because binarizing a document image might degrade its contents and lead to a loss of information, more and more researchers are currently focusing on segmentation-free methods (Angelika et al. [2]). Thus, we propose a local noise model for grayscale images. Its main principle is to locally degrade the image in the neighbourhoods of "seed-points" selected close to the character boundaries. These points define the centers of "noise regions". The pixel values inside a noise region are modified by a Gaussian random distribution to make the final result more realistic. While the Kanungo model simulates scanning artifacts, our model simulates degradations due to the age of the document itself and to the printing/writing process, such as ink splotches, white specks or streaks. It is very easy for users to parameterize the model and create a set of benchmark databases with an increasing level of noise. These databases can then be used to test the robustness of grayscale document image analysis methods (e.g. text line segmentation, OCR, handwriting recognition).
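
The sketch below is a hedged re-implementation of the described principle: seed points are sampled preferentially near character boundaries (here via a Sobel edge map), and a Gaussian perturbation is applied in a small region around each seed. The seed count, radius and standard deviation are illustrative parameters, not the model's calibrated values.

    import numpy as np
    from skimage.filters import sobel

    def local_noise(gray_page, n_seeds=200, radius=3, sigma=40.0, seed=None):
        """Add localized Gaussian perturbations around character boundaries."""
        rng = np.random.default_rng(seed)
        out = gray_page.astype(float).copy()
        # Sample seed points preferentially near strong edges (character boundaries).
        edge_map = sobel(out)
        prob = edge_map.ravel()
        prob = prob / prob.sum() if prob.sum() > 0 else np.full(out.size, 1.0 / out.size)
        idx = rng.choice(out.size, size=n_seeds, replace=False, p=prob)
        ys, xs = np.unravel_index(idx, out.shape)
        h, w = out.shape
        for y, x in zip(ys, xs):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y0:y1, x0:x1] += rng.normal(0.0, sigma, (y1 - y0, x1 - x0))
        return np.clip(out, 0, 255).astype(np.uint8)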

Abstract

Recto-verso registration is an important step that allows the detection of missing digitized pages or the localization of bleed-through defects on a page. An efficient way to restore or evaluate the bleed-through of a digitized document consists in analyzing both the recto side and the verso side at the same time. This requires the two images to be aligned, i.e. registered. Without particular knowledge about the document, recto-verso registration is complex: the only information that can be used to register the two sides is the bleed-through, and the recto's bleed-through is a highly degraded version of the verso's ink pixels. Therefore, in this particular context, the usual image comparison methods are not very relevant. Recto-verso registration algorithms have nevertheless been proposed, but these methods have high computational costs, are noise-sensitive, and even fail in some cases where the bleed-through is too light. These previous techniques are based on a pixel-to-pixel approach where the bleed-through is considered to be just a set of grey pixels. In this article, we consider the structure of the ink pixels on the verso page. The recto-verso registration method presented here is based on the fact that the bleed-through has the same structure as the ink on the verso side. The method registers the recto's bleed-through layout and the verso's ink layout in two main steps: first, a de-skewing algorithm is applied to both pages; then, horizontal and vertical profiles are extracted and aligned with dynamic time warping. The time complexity of our method is linear in the image size. Moreover, the experiments detailed at the end of the paper show the accuracy of our method.
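
The profile-alignment step can be sketched as follows: ink profiles are computed for the recto bleed-through mask and the mirrored verso ink mask, and a classic dynamic time warping pass aligns them. De-skewing, the column pass and the linear-time refinements of the paper are omitted, and all names are illustrative.

    import numpy as np

    def ink_profile(binary_ink, axis=1):
        """Number of ink pixels per row (axis=1) or per column (axis=0)."""
        return binary_ink.sum(axis=axis).astype(float)

    def dtw_path(p, q):
        """Classic O(len(p) * len(q)) DTW between two 1-D profiles; returns the path."""
        n, m = len(p), len(q)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(p[i - 1] - q[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        path, (i, j) = [], (n, m)
        while (i, j) != (0, 0):
            path.append((i - 1, j - 1))
            i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)],
                       key=lambda t: cost[t])
        return path[::-1]

    # recto_rows = ink_profile(recto_bleedthrough_mask)
    # verso_rows = ink_profile(np.fliplr(verso_ink_mask))
    # row_alignment = dtw_path(recto_rows, verso_rows)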

Contact us on Twitter: @AnrDigidoc


Copyright © ANR and the organizations involved in the DIGIDOC project