{"id":1463,"date":"2022-03-10T15:36:48","date_gmt":"2022-03-10T14:36:48","guid":{"rendered":"https:\/\/iapr-tc10.univ-lr.fr\/?p=1463"},"modified":"2022-03-10T15:44:29","modified_gmt":"2022-03-10T14:44:29","slug":"iapr-tc10-newsletter-150-march-2022","status":"publish","type":"post","link":"https:\/\/iapr-tc10.univ-lr.fr\/?p=1463","title":{"rendered":"[IAPR-TC10] Newsletter 150 &#8211; March 2022"},"content":{"rendered":"\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3-1024x571.png\" alt=\"\" class=\"wp-image-312\" width=\"232\" height=\"129\" srcset=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3-1024x571.png 1024w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3-300x167.png 300w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3-768x429.png 768w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3.png 1025w\" sizes=\"(max-width: 232px) 100vw, 232px\" \/><\/figure><\/div>\n\n\n\n<div class=\"wp-block-media-text alignwide has-media-on-the-right\" style=\"grid-template-columns:auto 28%\"><figure class=\"wp-block-media-text__media\"><img decoding=\"async\" loading=\"lazy\" width=\"640\" height=\"853\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2022\/03\/hello-i-m-nik-sIWzYAjULfA-unsplash.jpg\" alt=\"\" class=\"wp-image-1469 size-full\" srcset=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2022\/03\/hello-i-m-nik-sIWzYAjULfA-unsplash.jpg 640w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2022\/03\/hello-i-m-nik-sIWzYAjULfA-unsplash-225x300.jpg 225w\" sizes=\"(max-width: 640px) 100vw, 640px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p>Welcome to the March edition of the TC10 newsletter.<\/p>\n\n\n\n<p>In this issue, you will find the approaching 
deadlines for the <strong>ICFHR-IJDAR journal track<\/strong>, <strong>DAS short papers<\/strong> and the <strong>MANPU pre-call<\/strong> for papers. In addition, you will find two postdoctoral positions in France.<\/p>\n\n\n\n<p>Please take care,<\/p>\n\n\n\n<p>Christophe Rigaud<br>IAPR-TC10 Communications Officer<\/p>\n<\/div><\/div>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<div class=\"is-layout-flow wp-block-group\"><div class=\"wp-block-group__inner-container\">\n<div class=\"is-layout-flow wp-block-group\"><div class=\"wp-block-group__inner-container\">\n<p><span style=\"text-decoration: underline;\">Table of contents:<\/span><br><br>1) <a href=\"#1\">Upcoming deadlines and events<\/a><br>2) <a href=\"#5\">Call for Papers: ICFHR &#8211; IJDAR Special Edition<\/a><br>3) <a href=\"#6\">Call for short papers DAS 2022<\/a><br>4) <a href=\"#7\">Pre-call for papers MANPU 2022<\/a><br>5) <a href=\"#9\">Job offers<\/a> <\/p>\n<\/div><\/div>\n<\/div><\/div>\n\n\n\n<p><strong>Call for contributions:<\/strong> feel free to contribute to TC10 newsletters by sending any relevant news, event, notice, open position, dataset or link to us at iapr.tc10[at]gmail.com<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"1\">1) Upcoming deadlines and events<\/h2>\n\n\n\n<h4>2022<\/h4>\n\n\n\n<ul><li>Deadlines:<ul><li><strong>March 14<\/strong>, <em>tutorial proposal submission<\/em> <a rel=\"noreferrer noopener\" href=\"http:\/\/www.icpr2022.com\" target=\"_blank\">ICPR 2022<\/a><\/li><li><strong>March 15<\/strong>, <em>journal track submission deadline<\/em> <a href=\"http:\/\/icfhr2022.org\/call-for-journal.php\">ICFHR-IJDAR<\/a><\/li><li><strong>May 1<\/strong>, <em>bid proposal deadline<\/em> <a rel=\"noreferrer noopener\" href=\"https:\/\/iapr.org\/conferences\/proposals.php\" target=\"_blank\">ICPR 2026<\/a><\/li><li><strong>May 13<\/strong>, <em>paper submission deadline<\/em> MANPU 
2022<\/li><li><strong>May<\/strong>, <em>paper submission deadline<\/em> <a href=\"http:\/\/icfhr2022.org\/\">ICFHR 2022<\/a><\/li><\/ul><\/li><\/ul>\n\n\n\n<ul><li>Events:<ul><li><strong>May 22-25<\/strong>, <em>conference<\/em> <a rel=\"noreferrer noopener\" href=\"https:\/\/das2022.univ-lr.fr\/\" target=\"_blank\">DAS 2022<\/a>, La Rochelle, France<\/li><li><strong>June 1-3<\/strong>, <em>conference<\/em> <a rel=\"noreferrer noopener\" href=\"https:\/\/icprai2022.sciencesconf.org\/\" target=\"_blank\">ICPRAI 2022<\/a>, Paris, France<\/li><li><strong>August 21<\/strong>, <em>workshop<\/em> MANPU 2022, Montr\u00e9al, Qu\u00e9bec (QC), Canada<\/li><li><strong>August 21-25<\/strong>, <em>conference<\/em> <a rel=\"noreferrer noopener\" href=\"http:\/\/www.icpr2022.com\" target=\"_blank\">ICPR 2022<\/a>, Montr\u00e9al, Qu\u00e9bec (QC), Canada<\/li><li><strong>October 16-19<\/strong>, <em>conference<\/em> <a rel=\"noreferrer noopener\" href=\"https:\/\/2022.ieeeicip.org\" target=\"_blank\">ICIP 2022<\/a>, Bordeaux, France<\/li><li><strong>December 2022<\/strong>, <em>conference<\/em> <a href=\"http:\/\/icfhr2022.org\/\">ICFHR 2022<\/a>, Hyderabad, India<\/li><\/ul><\/li><\/ul>\n\n\n\n<p><strong>2023 and later<\/strong><\/p>\n\n\n\n<ul><li>Events:<ul><li><strong>August 2023<\/strong>, <em>conference<\/em> <a href=\"https:\/\/icdar2023.org\/\">ICDAR 2023<\/a>, San Jos\u00e9, California, USA<\/li><\/ul><\/li><\/ul>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"5\">2) Call for Papers: ICFHR &#8211; IJDAR Special Edition<\/h2>\n\n\n\n<p>As part of ICFHR 2022, we introduce the ICFHR-IJDAR journal track, similar to the ICDAR-IJDAR journal tracks of ICDAR 2019 and 2021. 
This journal track provides the rapid and timely turnaround of scientific developments that conferences offer to the community, while maintaining the rigor and discipline of journal review. We invite exceptional works from the field of handwriting recognition that extend significantly beyond a typical conference paper.<\/p>\n\n\n\n<h4><a><\/a> Author Guidelines<\/h4>\n\n\n\n<ul><li>Survey papers and papers introducing new datasets are welcome<\/li><li>Papers proposing tools will not be accepted; they should be submitted to the conference instead<\/li><li>Papers should be well written and concise, clearly motivate the problem, and communicate the proposed solution<\/li><li>Papers should be comprehensive, complete and self-contained; they should be understandable without reference to other papers<\/li><li>Submissions accepted for this special edition will receive an oral presentation at the conference<\/li><li>Manuscripts should be limited to 20 pages (two columns, double spaced and inclusive of figures), except for cases where the topic warrants additional space. Authors proposing papers longer than 20 pages should provide detailed justification to the editors in the cover letter for the submission. 
Editors will review requests on a case-by-case basis.<\/li><\/ul>\n\n\n\n<h4><a><\/a> Areas of Interest<\/h4>\n\n\n\n<p>The topics of interest are identical to those of the ICFHR conference:<\/p>\n\n\n\n<ul><li>Handwriting Recognition<\/li><li>Cursive Script Recognition<\/li><li>Symbol, Equation, Sketch and Drawing Recognition<\/li><li>Handwritten Document Processing and Understanding<\/li><li>Language Models in Handwriting Recognition<\/li><li>Web-Based Applications<\/li><li>Handwritten Databases and Digital Libraries<\/li><li>Information Extraction &amp; Retrieval<\/li><li>Signature Recognition and Verification<\/li><li>Recognition of Handwritten Graphical Documents<\/li><li>Document Characterization<\/li><li>Layout Analysis<\/li><li>Form Processing<\/li><li>Word Spotting<\/li><li>Bank-Check Processing<\/li><li>Historical Document Processing<\/li><li>Forensic Studies and Security Issues<\/li><li>Writer Verification and Identification<\/li><li>Metrics and Evaluation<\/li><li>Electronic Ink and Pen-Based Systems<\/li><li>Other Offline and Online Applications<\/li><li>Writing in the Air<\/li><\/ul>\n\n\n\n<h4><a><\/a> Important Dates<\/h4>\n\n\n\n<p>The timeline for the activities is given below.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>Activities<\/strong><\/td><td><strong>Dates<\/strong><\/td><\/tr><tr><td>Initial submission deadline<\/td><td>March 15, 2022<\/td><\/tr><tr><td>Initial decisions<\/td><td>May 15, 2022<\/td><\/tr><tr><td>Submission of revised journal papers<\/td><td>June 15, 2022<\/td><\/tr><tr><td>Final decisions<\/td><td>August 31, 2022<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h4><a><\/a> Guest Editors<\/h4>\n\n\n\n<p>Alicia Forn\u00e9s<br>Utkarsh Porwal<br>Faisal Shafait<\/p>\n\n\n\n<p><a><\/a> ICFHR Conference Page &#8211; <a href=\"http:\/\/icfhr2022.org\/index.php\"><u>http:\/\/icfhr2022.org\/index.php<\/u><\/a><\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"6\">3) Call 
for short papers DAS 2022<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">What: 15th IAPR International Workshop on Document Analysis Systems<br>Where: La Rochelle, France + online<br>URL: <a href=\"https:\/\/das2022.univ-lr.fr\/\">https:\/\/das2022.univ-lr.fr\/<\/a><\/pre>\n\n\n\n<p>DAS 2022 is the 15th international IAPR-sponsored workshop dedicated to system-level approaches and related challenges in document analysis and recognition. Short papers provide an opportunity to report on research in progress and to present demos and novel positions on document analysis systems. Short papers (up to 4 pages in length) will undergo review and will appear in an extra booklet, not in the official DAS 2022 proceedings.<\/p>\n\n\n\n<p>Submission link: <a href=\"https:\/\/easychair.org\/my\/conference?conf=das2022\">https:\/\/easychair.org\/my\/conference?conf=das2022<\/a><\/p>\n\n\n\n<p><strong>IMPORTANT DATES FOR SHORT PAPERS<\/strong><\/p>\n\n\n\n<p><strong>15 March 2022<\/strong>: short paper submission deadline<\/p>\n\n\n\n<p>01 April 2022: short paper acceptance notification<\/p>\n\n\n\n<p>22 April 2022: short paper camera-ready version<\/p>\n\n\n\n<p><strong>ABOUT DAS 2022<\/strong><\/p>\n\n\n\n<p>DAS 2022 is the 15th international IAPR-sponsored workshop dedicated to system-level approaches and related challenges in document analysis and recognition. This includes models, methods, and relevant applications satisfying real-world engineering requirements. The workshop provides an exciting platform for interactions and high-level technical exchanges between industrial and academic communities. The DAS 2022 program will include invited talks, oral and poster paper presentations, tutorials, demonstrations, and working group discussions.<\/p>\n\n\n\n<p>DAS 2022 will be held in the historical city of La Rochelle located on the French Atlantic coast. 
La Rochelle is famous for its old port, stunning seaside views, urban beaches, and for the second largest private aquarium in Europe. Online\/virtual participation will also be available for DAS 2022 authors and participants.<\/p>\n\n\n\n<p><strong>TOPICS OF INTEREST<\/strong><\/p>\n\n\n\n<p>* Document analysis systems<\/p>\n\n\n\n<p>* Document understanding<\/p>\n\n\n\n<p>* Layout analysis<\/p>\n\n\n\n<p>* Camera-based document analysis<\/p>\n\n\n\n<p>* Document analysis for digital humanities<\/p>\n\n\n\n<p>* Document analysis for libraries and archives<\/p>\n\n\n\n<p>* Document analysis for the internet<\/p>\n\n\n\n<p>* Document analysis for mobile devices<\/p>\n\n\n\n<p>* Document authentication<\/p>\n\n\n\n<p>* Document datasets<\/p>\n\n\n\n<p>* Document image watermarking<\/p>\n\n\n\n<p>* Document retrieval<\/p>\n\n\n\n<p>* Deep learning for document analysis systems<\/p>\n\n\n\n<p>* Information extraction from document images<\/p>\n\n\n\n<p>* Graphics recognition<\/p>\n\n\n\n<p>* Table and form processing<\/p>\n\n\n\n<p>* Mathematical expression recognition<\/p>\n\n\n\n<p>* Forensic document analysis<\/p>\n\n\n\n<p>* Historical document analysis<\/p>\n\n\n\n<p>* Multilingual document analysis<\/p>\n\n\n\n<p>* Multimedia document analysis<\/p>\n\n\n\n<p>* Pen-based input and its analysis<\/p>\n\n\n\n<p>* NLP for document analysis<\/p>\n\n\n\n<p>* Human document interaction<\/p>\n\n\n\n<p>* Authoring, annotation, and presentation systems<\/p>\n\n\n\n<p>* Performance evaluation<\/p>\n\n\n\n<p>* Applications<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"7\">4) <strong>Pre-call for papers MANPU 2022<\/strong><\/h2>\n\n\n\n<p><strong>5th International Workshop on coMics ANalysis, Processing and Understanding<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">Pre-call for papers (in conjunction with ICPR 2022)\nAugust 21, 2022, Montreal, Canada<\/pre>\n\n\n\n<p>Comics is a medium composed of images combined with 
text and other visual information in order to narrate a story. Nowadays, comic books are a widespread cultural expression all over the world, and the comics market continues to grow; in Japan, for example, it was worth about 4.25 billion USD in 2015. Moreover, from the research point of view, comics images are attractive targets because the structure of a comics page includes various elements (such as panels, speech balloons, captions, leading characters, and so on), whose drawing depends on the style of the author and presents a large variability. Therefore, comics image analysis is not a trivial problem and is still immature compared with other kinds of image analysis.<\/p>\n\n\n\n<h3>Important dates<\/h3>\n\n\n\n<p>Paper submission due: May 13, 2022<\/p>\n\n\n\n<p>Notification of acceptance: May 31, 2022<\/p>\n\n\n\n<p>Camera-ready paper due: June 6, 2022<\/p>\n\n\n\n<p>Workshop day: August 21, 2022<\/p>\n\n\n\n<h3>Scope and Topics<\/h3>\n\n\n\n<p>The scope of this workshop includes, but is not limited to:<\/p>\n\n\n\n<p>&#8211; Comics Image Processing<\/p>\n\n\n\n<p>&#8211; Comics Analysis and Understanding<\/p>\n\n\n\n<p>&#8211; Comics Recognition<\/p>\n\n\n\n<p>&#8211; Comics Retrieval and Spotting<\/p>\n\n\n\n<p>&#8211; Comics Enrichment<\/p>\n\n\n\n<p>&#8211; Born Digital Comics<\/p>\n\n\n\n<p>&#8211; Reading Behavior Analysis of Comics<\/p>\n\n\n\n<p>&#8211; Comics Generation<\/p>\n\n\n\n<p>&#8211; Copy Protection and Fraud Detection<\/p>\n\n\n\n<p>&#8211; Physical\/Digital Comics Interfaces<\/p>\n\n\n\n<p>&#8211; Cognitive Processing and Comprehension of Comics<\/p>\n\n\n\n<p>&#8211; Linguistic Analysis of Comics<\/p>\n\n\n\n<h3>Datasets<\/h3>\n\n\n\n<p>To evaluate the proposed works, participants will be able to use the following publicly available datasets. 
Researchers can request to download them at each website.<\/p>\n\n\n\n<p>&#8211; eBDtheque consists of 100 images with ground truth for panels, speech balloons, tails, text lines, and leading characters: <a rel=\"noreferrer noopener\" href=\"http:\/\/ebdtheque.univ-lr.fr\/\" target=\"_blank\">http:\/\/ebdtheque.univ-lr.fr\/<\/a><\/p>\n\n\n\n<p>&#8211; Manga109 consists of 109 volumes comprising 21,142 images: <a rel=\"noreferrer noopener\" href=\"http:\/\/www.manga109.org\/en\/\" target=\"_blank\">http:\/\/www.manga109.org\/en\/<\/a><\/p>\n\n\n\n<h3>Paper Submissions<\/h3>\n\n\n\n<p>Submission and Review<\/p>\n\n\n\n<p>All papers must be submitted through the EasyChair submission system on or before the submission deadline. Authors can update their papers until the submission deadline. MANPU 2022 will follow a single-blind review process.<\/p>\n\n\n\n<p>Guidelines for the authors will be published soon (website under construction).<\/p>\n\n\n\n<p><strong>General Co-Chairs<\/strong><\/p>\n\n\n\n<p>Jean-Christophe Burie (France)<\/p>\n\n\n\n<p>Motoi Iwata (Japan)<\/p>\n\n\n\n<p>Miki Ueno (Japan)<\/p>\n\n\n\n<p><strong>Program Co-Chairs<\/strong><\/p>\n\n\n\n<p>Rita Hartel (Germany)<\/p>\n\n\n\n<p>Ryosuke Yamanishi (Japan)<\/p>\n\n\n\n<p>Tien-Tsin Wong (Hong Kong)<\/p>\n\n\n\n<p><strong>Advisory Board<\/strong><\/p>\n\n\n\n<p>Kiyoharu Aizawa (Japan)<\/p>\n\n\n\n<p>Koichi Kise (Japan)<\/p>\n\n\n\n<p>Jean-Marc Ogier (France)<\/p>\n\n\n\n<p>Toshihiko Yamasaki (Japan)<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"9\">5) Job offers<\/h2>\n\n\n\n<h3 id=\"toc_13\">Research Engineer\/PostDoc Position (2.5 Years) &#8211; IRISA\/INSA Rennes (France)<\/h3>\n\n\n\n<h4>Title: Combining Deep and Syntactical Models for a Self-adaptive Optical Music Recognition System applied on Historical Orchestra Scores<\/h4>\n\n\n\n<p><strong class=\"\">PDF version: <\/strong><a 
href=\"https:\/\/www-intuidoc.irisa.fr\/files\/2021\/10\/SujetInge_Collabscore.pdf\">https:\/\/www-intuidoc.irisa.fr\/files\/2021\/10\/SujetInge_Collabscore.pdf<\/a><\/p>\n\n\n\n<p><strong class=\"\">Important Dates<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>April 1, 2022 - November 30, 2024  Contract period<\/code><\/pre>\n\n\n\n<p><strong class=\"\">IRISA &#8211; Intuidoc<\/strong><\/p>\n\n\n\n<p>IRISA is a joint research center for Informatics, including Robotics and Image and Signal Processing. 850 people, 40 teams, explore the world of digital sciences to find applications in healthcare, ecology-environment, cyber-security, transportation, multimedia, and industry. INSA Rennes is one of the 8 trustees of IRISA.<\/p>\n\n\n\n<p>The Intuidoc team (<a href=\"https:\/\/www.irisa.fr\/intuidoc\" class=\"\">https:\/\/www.irisa.fr\/intuidoc<\/a>) conducts research on the topic of document image recognition. Since many years, the team proposes a system, called DMOS-PI method, for document structure analysis of documents. This DMOS-PI method is used for document recognition, or field extraction in archive documents, handwritten contents damaged documents (musical scores, archives, newspapers, letters, electronic schema, etc.).<\/p>\n\n\n\n<p><strong class=\"\">Collabscore project<\/strong><\/p>\n\n\n\n<p>Collabscore is a project founded by ANR (French Research National Agency), led by the CNAM. The goal is to&nbsp;study ancient scores provided by the BNF (Biblioth\u00e8que National de France) and Royaumont foundation.&nbsp;Collabscore is a multidisciplinary project. The first task consists in improving OMR (Optical Music Recognition)&nbsp;results using learning techniques. The second action will focus on methods for automatic alignment of the scored&nbsp;score with other multimodal sources. 
The last one will set up demonstrators based on notated scores at two of the project partners (BnF and Fondation Royaumont), representative, in various ways, of institutions in charge of musical heritage collections. The Intuidoc team focuses on the first task, musical score recognition.<\/p>\n\n\n\n<p><strong class=\"\">Position to be filled<\/strong><\/p>\n\n\n\n<ul><li>Position: Post-doctoral fellow \/ Research Engineer<\/li><li>Time commitment: Full-time<\/li><li>Duration of the contract: up to 32 months, starting as soon as possible<\/li><li>Supervisors: Bertrand Co\u00fcasnon, Aur\u00e9lie Lemaitre, Yann Soullard<\/li><li>Indicative salary: Up to \u20ac36 000 gross annual salary (according to experience), with social security benefits<\/li><li>Location: IRISA &#8212; Rennes, France<\/li><\/ul>\n\n\n\n<p><strong class=\"\">Missions<\/strong><\/p>\n\n\n\n<p>The post-doctoral\/engineer fellow will work on the design of an OMR system. Based on previous works of our research team, the goal of this position is to enrich an existing system (DMOS-PI) to obtain a complete self-adaptive OMR system for historical orchestra scores. The tasks are mainly to:<\/p>\n\n\n\n<ul><li class=\"\">define a grammatical description of musical notation, using the existing DMOS-PI method;<\/li><li class=\"\">generate unsupervised data for training musical symbol recognizers, using the Isolating-GAN, a novel unsupervised music symbol detection method based on Generative Adversarial Networks (GANs);<\/li><li class=\"\">create a gradual mechanism for adapting the system to new scores, to build a self-adaptive system with little annotated data;<\/li><li class=\"\">integrate anomaly detection into the system.<\/li><\/ul>\n\n\n\n<p>This work involves logic programming based on grammars and languages. 
Machine learning methods, especially deep learning-based approaches (GAN, R-CNN, SSD&#8230;), will be used to solve some of the tasks, as done in our previous works on music symbol detection.<\/p>\n\n\n\n<p><strong class=\"\">Applicant Requirements<\/strong><\/p>\n\n\n\n<ul><li class=\"\">PhD, Master's degree or Engineering degree in computer science<\/li><li class=\"\">Experience in document recognition or statistical analysis.<\/li><li class=\"\">Skills in grammars and languages and\/or logic programming are nice-to-have, as well as knowledge of music notation.<\/li><li class=\"\">Knowledge of deep learning and experience with at least one deep learning library (Keras, TensorFlow, PyTorch) are expected.<\/li><\/ul>\n\n\n\n<p>Candidates should apply via email to Bertrand Co\u00fcasnon (<a class=\"\" href=\"mailto:bertrand.couasnon@irisa.fr\">bertrand.couasnon@irisa.fr<\/a>), Aur\u00e9lie Lemaitre (<a class=\"\" href=\"mailto:aurelie.lemaitre@irisa.fr\">aurelie.lemaitre@irisa.fr<\/a>) and Yann Soullard (<a class=\"\" href=\"mailto:yann.soullard@irisa.fr\">yann.soullard@irisa.fr<\/a>).<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3><strong>Post-doctoral research position &#8211; L3i &#8211; La Rochelle, France<\/strong><\/h3>\n\n\n\n<p><strong>Title: Extraction of information in \u201cBande Dessin\u00e9e\u201d \/ Manga \/ Comics albums<\/strong><\/p>\n\n\n\n<p>The L3i laboratory has one open post-doc position in computer science, in the specific field of document image analysis and pattern recognition.<\/p>\n\n\n\n<p><strong>Duration<\/strong>: 24 months <br><strong>Position available from<\/strong>: October 1st, 2021 <br><strong>Salary<\/strong>: approximately 2100 \u20ac \/ month (net)<br><strong>Place<\/strong>: L3i lab, University of La Rochelle, France <br><strong>Specialty<\/strong>: Computer Science\/Image Processing\/Document Analysis\/Pattern Recognition\/Deep Learning <br><strong>Contact<\/strong>: 
Jean-Christophe BURIE (jcburie [at] univ-lr.fr)<\/p>\n\n\n\n<p><strong>Position Description<\/strong><\/p>\n\n\n\n<p>The L3i is a research lab of the University of La Rochelle. La Rochelle is a city on the Atlantic coast in the southwest of France and one of the most attractive and dynamic cities in the country. The L3i has worked for several years on document analysis and has developed well-known expertise in \u201cBande dessin\u00e9e\u201d, manga and comics analysis, indexing and understanding.<\/p>\n\n\n\n<p>The work done by the post-doc will be part of <strong>SAIL<\/strong> (Sequential Art Image Laboratory), a joint laboratory involving L3i and a private company. The objective is to create innovative tools to index and interact with digital comics. The work will be done in a team of 10 researchers and engineers.<\/p>\n\n\n\n<p>The work entrusted to the recruited person will consist of developing original approaches for extracting relevant information from comics panels in order to understand their content. The team has already developed methods to extract panels, speech balloons, text, characters (persons) and faces. However, the large variability in the representation of these elements calls for different approaches or strategies. 
Depending on his\/her skills and knowledge, the candidate will work on improving the methods dedicated to one of these elements.<\/p>\n\n\n\n<p>Other challenges may also be considered, such as:<\/p>\n\n\n\n<ul><li>Detection and understanding of the scenery (sea, countryside, city, \u2026)<\/li><li>Detection and understanding of the context of the scene (battle, discussion, people eating, \u2026)<\/li><li>Object detection and recognition (bicycle, car, table, chair, \u2026)<\/li><li>\u2026<\/li><\/ul>\n\n\n\n<p>Traditional and\/or deep learning-based strategies can be studied to achieve these objectives.<\/p>\n\n\n\n<p><strong>Qualifications<\/strong><\/p>\n\n\n\n<p>Candidates must have a completed PhD and research experience in image processing and analysis, and pattern recognition. Good knowledge and experience in deep learning are also recommended.<\/p>\n\n\n\n<p><strong>General Qualifications<\/strong><\/p>\n\n\n\n<p>\u2022 Good programming skills mastering at least one programming language like Python, Java, C\/C++ <br>\u2022 Good teamwork skills <br>\u2022 Good writing skills and proficiency in written and spoken English or French<\/p>\n\n\n\n<p><strong>Applications<\/strong><\/p>\n\n\n\n<p>Candidates should send a CV and a motivation letter to jcburie [at] univ-lr.fr.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Welcome to the March edition of the TC10 newsletter. In this issue, you will find the approaching deadlines for ICFHR-IJDAR journal track, DAS short papers and MANPU pre-call for papers. 
[&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_exactmetrics_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"_links_to":"","_links_to_target":""},"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/posts\/1463"}],"collection":[{"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1463"}],"version-history":[{"count":12,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/posts\/1463\/revisions"}],"predecessor-version":[{"id":1477,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/posts\/1463\/revisions\/1477"}],"wp:attachment":[{"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1463"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1463"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1463"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}