{"id":1286,"date":"2021-06-09T16:10:35","date_gmt":"2021-06-09T15:10:35","guid":{"rendered":"https:\/\/iapr-tc10.univ-lr.fr\/?p=1286"},"modified":"2021-06-19T11:12:57","modified_gmt":"2021-06-19T10:12:57","slug":"iapr-tc10-newsletter-146-june-2021","status":"publish","type":"post","link":"https:\/\/iapr-tc10.univ-lr.fr\/?p=1286","title":{"rendered":"[IAPR-TC10] Newsletter 146 &#8211; June 2021"},"content":{"rendered":"\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3-1024x571.png\" alt=\"\" class=\"wp-image-312\" width=\"232\" height=\"129\" srcset=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3-1024x571.png 1024w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3-300x167.png 300w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3-768x429.png 768w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3.png 1025w\" sizes=\"(max-width: 232px) 100vw, 232px\" \/><\/figure><\/div>\n\n\n\n<div class=\"wp-block-media-text alignwide has-media-on-the-right\" style=\"grid-template-columns:auto 28%\"><figure class=\"wp-block-media-text__media\"><img decoding=\"async\" loading=\"lazy\" width=\"683\" height=\"1024\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/06\/june_graphic.jpg\" alt=\"\" class=\"wp-image-1302 size-full\" srcset=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/06\/june_graphic.jpg 683w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/06\/june_graphic-200x300.jpg 200w\" sizes=\"(max-width: 683px) 100vw, 683px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p>Welcome to the June 2021 edition of the TC10 newsletter.<\/p>\n\n\n\n<p>In this issue, you will find Annual ICDAR voting results, latest IJDAR issue, the invitation to the summer 
school on Document Analysis, ICDAR call for nomination awards (extended) and related workshop information. Also, please check the <a rel=\"noreferrer noopener\" href=\"https:\/\/icdar2021.org\/news\/\" target=\"_blank\">ICDAR news<\/a> page for updates and useful links.<\/p>\n\n\n\n<p>Take care,<\/p>\n\n\n\n<p>Christophe Rigaud<br>IAPR-TC10 Communications Officer<\/p>\n<\/div><\/div>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<div class=\"is-layout-flow wp-block-group\"><div class=\"wp-block-group__inner-container\">\n<div class=\"is-layout-flow wp-block-group\"><div class=\"wp-block-group__inner-container\">\n<p><span style=\"text-decoration: underline;\">Table of contents:<\/span><br>1) <a href=\"#1\">Upcoming deadlines and events<\/a><br>2) <a href=\"#2\">Annual ICDAR voting results<\/a><br>3) <a href=\"#3\">IJDAR article alert<\/a><br>4) <a href=\"#4\">Summer School on Document Analysis (SSDA 2021)<\/a><br>5) <a href=\"#5\">ICDAR 2021 Call for Nomination Awards<\/a> (<strong>extended<\/strong>)<br>6) <a href=\"#6\">ICDAR 2021 Workshop on Human-Document Interaction (HDI)<\/a><br>7) <a href=\"#7\">ICDAR 2021 Workshop on Camera-Based Document Analysis and Recognition (CBDAR)<\/a><br>8) <a href=\"#8\">ICDAR 2021 Workshop on Graphic Recognition (GREC) (repost)<\/a><br>9) <a href=\"#9\">International conference \u201cFantastic Futures\u201d 2021 (3<sup>rd<\/sup> edition)<\/a><br>10) <a href=\"#10\">Job offer (<strong>1 new<\/strong>)<\/a><\/p>\n<\/div><\/div>\n<\/div><\/div>\n\n\n\n<p><strong>Call for contributions:<\/strong> Feel free to contribute to TC10 newsletters by sending any relevant news, event, notice, open position, dataset or link to us at iapr.tc10[at]gmail.com.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"1\">1) Upcoming deadlines and events<\/h2>\n\n\n\n<h4>2021<\/h4>\n\n\n\n<ul><li>Deadlines:<ul><li><strong>June 21<\/strong>, <em>award nomination<\/em> <em>deadline<\/em> <a href=\"https:\/\/icdar2021.org\/\">ICDAR 2021<\/a> 
(extended)<\/li><\/ul><ul><li><strong>June 30<\/strong>, <em>paper submission<\/em> <em>deadline<\/em> <a href=\"http:\/\/brain.korea.ac.kr\/acpr\/\">ACPR 2021<\/a> (extended)<\/li><\/ul><\/li><li>Events:<ul><li><strong>August 23-25<\/strong>, <em>summer school <a href=\"https:\/\/www.ltu.se\/research\/subjects\/Maskininlarning\/SSDA-2021?l=en\">SSDA 2021<\/a><\/em>, Lule\u00e5, Sweden<\/li><li><strong>September 5,<\/strong> <em>workshop<\/em> <a rel=\"noreferrer noopener\" href=\"https:\/\/grec2021.univ-lr.fr\" target=\"_blank\">GREC 2021<\/a>, Lausanne, Switzerland<\/li><li><strong>September 6,<\/strong> <em>workshop<\/em> <a rel=\"noreferrer noopener\" href=\"https:\/\/cbdar2021.univ-lr.fr\/\" target=\"_blank\">CBDAR 2021<\/a>, Lausanne, Switzerland<\/li><li><strong>September 6,<\/strong> <em>workshop<\/em> <a rel=\"noreferrer noopener\" href=\"https:\/\/grce.labri.fr\/HDI\/\" target=\"_blank\">HDI 2021<\/a>, Lausanne, Switzerland<\/li><li><strong>September 5-10,<\/strong> <em>conference<\/em> <a href=\"https:\/\/icdar2021.org\/\">ICDAR 2021<\/a>, Lausanne, Switzerland<\/li><li><strong>November 9-12<\/strong>, conference <a href=\"http:\/\/brain.korea.ac.kr\/acpr\/\">ACPR 2021<\/a>, Jeju Island, Korea<\/li><\/ul><\/li><\/ul>\n\n\n\n<h4>2022 and later<\/h4>\n\n\n\n<ul><li>Events:<ul><li><strong>August 21-25<\/strong>, conference <a href=\"http:\/\/www.icpr2022.com\" target=\"_blank\" rel=\"noreferrer noopener\">ICPR 2022<\/a>, Montr\u00e9al, Qu\u00e9bec (QC), Canada<\/li><li><strong>December 2022<\/strong>, <em>conference<\/em> <a href=\"http:\/\/www.icfhr2022.org\">ICFHR 2022<\/a>, Hyderabad, India<\/li><\/ul><\/li><\/ul>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"2\">2) Annual ICDAR voting results<\/h2>\n\n\n\n<p>The voting has now finished: there were 136 votes, with 101 (74%) in favor of organizing ICDAR annually. 
Thank you all for participating; feel free to have a look at the <a rel=\"noreferrer noopener\" href=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/06\/Response-Details.pdf\" target=\"_blank\">response details<\/a>, the ongoing discussions on the <a href=\"https:\/\/www.kialo.com\/we-should-fuse-icdar--icfhr--das--grec-into-a-single-annual-conference-30656\">Kialo platform<\/a> and the <a rel=\"noreferrer noopener\" href=\"https:\/\/sites.google.com\/view\/darstrategy\" target=\"_blank\">DAR strategy website<\/a>. We invite you to attend the <a rel=\"noreferrer noopener\" href=\"https:\/\/sites.google.com\/view\/darstrategy\/fdar-2021\" target=\"_blank\">3rd Future of Document Analysis and Recognition Workshop<\/a>.<br>It will be held on Sunday 6th 2021. We will consider the results of this vote and continue the discussion of the future directions of our community.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"3\">3) IJDAR article alert<\/h2>\n\n\n\n<p><strong>Volume 24, Issue 1-2, June 2021<\/strong><br><a rel=\"noreferrer noopener\" href=\"https:\/\/link.springer.com\/journal\/10032\/volumes-and-issues\/24-1\" target=\"_blank\">https:\/\/link.springer.com\/journal\/10032\/volumes-and-issues\/24-1<\/a><\/p>\n\n\n\n<ul><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00372-6\">Deep learning for graphics recognition: document understanding and beyond<\/a><br>Jean-Christophe Burie, Alicia Forn\u00e9s &amp; Muhammad Muzzamil Luqman<\/li><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-020-00361-1\">Arrow R-CNN for handwritten diagram recognition<\/a> (<em>open access<\/em>)<br>Bernhard Sch\u00e4fer, Margret Keuper &amp; Heiner Stuckenschmidt<\/li><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00367-3\">Knowledge-driven description synthesis for floor plan interpretation<\/a><br>Shreya Goyal, Chiranjoy Chattopadhyay &amp; Gaurav Bhatnagar<\/li><li><a 
href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00364-6\">Cross-modal photo-caricature face recognition based on dynamic multi-task learning<\/a><br>Zuheng Ming, Jean-Christophe Burie &amp; Muhammad Muzzamil Luqman<\/li><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00366-4\">CNN-based segmentation of speech balloons and narrative text boxes from comic book page images<\/a><br>Arpita Dutta, Samit Biswas &amp; Amit Kumar Das<\/li><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-020-00360-2\">Translating math formula images to LaTeX sequences using deep neural networks with sequence-level training<\/a><br>Zelun Wang &amp; Jyh-Charn Liu<\/li><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00362-8\">Combination of deep neural networks and logical rules for record segmentation in historical handwritten registers using few examples<\/a><br>Sol\u00e8ne Tarride, Aur\u00e9lie Lemaitre, Bertrand Co\u00fcasnon &amp; Sophie Tardivel<\/li><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00365-5\">Offline script recognition from handwritten and printed multilingual documents: a survey<\/a><br>Deepak Sinwar, Vijaypal Singh Dhaka, Nitesh Pradhan &amp; Saumya Pandey<\/li><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00363-7\">Text recognition for Vietnamese identity card based on deep features network<\/a><br>Duc Phan Van Hoai, Huu-Thanh Duong &amp; Vinh Truong Hoang<\/li><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00368-2\">Persian handwritten digit, character and word recognition using deep learning<\/a><br>Mahdi Bonyani, Simindokht Jahangard &amp; Morteza Daneshmand<\/li><\/ul>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"4\">4) Summer School on Document Analysis (4<sup>th<\/sup> edition, SSDA)<\/h2>\n\n\n\n<div class=\"wp-block-media-text alignwide has-media-on-the-right is-stacked-on-mobile\" 
style=\"grid-template-columns:auto 61%\"><figure class=\"wp-block-media-text__media\"><img decoding=\"async\" src=\"https:\/\/www.ltu.se\/cms_fs\/1.82362!\/image\/image.jpg_gen\/derivatives\/landscape_fullwidth\/image.jpg\" alt=\"\"\/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-normal-font-size\"><strong>23<sup>rd<\/sup> to 27<sup>th<\/sup> of August 2021<br><strong>Lule\u00e5<\/strong> (Sweden)<br><\/strong><a href=\"https:\/\/www.ltu.se\/research\/subjects\/Maskininlarning\/SSDA-2021?l=en\">https:\/\/www.ltu.se\/research\/subjects\/Maskininlarning\/SSDA-2021?l=en<\/a><\/p>\n<\/div><\/div>\n\n\n\n<p><em><strong>Digital Transformation in a Changing World<\/strong><\/em><\/p>\n\n\n\n<p>The objective of the school is to introduce participants to different aspects of the digital transformation of documents and beyond. The latest research being carried out in document understanding, document (image) analysis, natural scene text detection and recognition, historical document analysis, Corona and Virtualization, and other new topics will be covered in the school.<\/p>\n\n\n\n<p>The summer school will give participants a great opportunity to expand their knowledge and skills by linking theory with real implementation. Speakers from different areas of expertise will be invited to enrich the overall impact of the school. 
By the end of the school, participants will have advanced knowledge and application experience in:<\/p>\n\n\n\n<ul><li>Applied AI in document analysis<\/li><li>Document analysis for business applications<\/li><li>Natural scene text detection<\/li><li>Complex document understanding<\/li><li>Historical document processing<\/li><li>Current challenges in the field of document analysis<\/li><li>Contribution of major stakeholders in this field<\/li><li>Corona and Virtualization.<\/li><\/ul>\n\n\n\n<p>A unique aspect of this summer school will be its novel hybrid concept, allowing participants from all areas (depending on the current Corona restrictions) to attend either virtually or physically. With long-standing experience in distance education and conferences in Lule\u00e5 and the northern region, following pedagogical principles of effective teaching and learning, LTU is the perfect host for the 2021 summer school in a demanding and changing world.<\/p>\n\n\n\n<p><strong>Contact: ssda2021@ltu.se<\/strong><br><strong>Website: <a href=\"https:\/\/www.ltu.se\/research\/subjects\/Maskininlarning\/SSDA-2021?l=en\">https:\/\/www.ltu.se\/research\/subjects\/Maskininlarning\/SSDA-2021?l=en<\/a><\/strong><\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"5\">5) ICDAR 2021 Call for Nomination Awards (extended)<\/h2>\n\n\n\n<p><strong>Nominations Due: <s>April 30<\/s>, June 21, 2021<\/strong><\/p>\n\n\n\n<p>The IAPR\/ICDAR Award Program is an established program designed to recognize individuals who have made outstanding contributions to the field of Document Analysis and Recognition in one or more of the following areas:<\/p>\n\n\n\n<ul><li>Research<\/li><li>Training of students<\/li><li>Research\/Industry interaction<\/li><li>Service to the community<\/li><\/ul>\n\n\n\n<p>Every two years, two award categories are presented. 
Namely, the <em><strong>IAPR\/ICDAR Young Investigator Award<\/strong><\/em> (less than 40 years old at the time the award is made), and the <em><strong>IAPR\/ICDAR Outstanding Achievements Award<\/strong><\/em>. Each award will  consist of a token gift and a suitably inscribed certificate. The recipient of the Outstanding Achievements award will be invited to give the opening keynote speech at the ICDAR 2021 conference, introduced by the recipient from the previous conference.<\/p>\n\n\n\n<p>Nominations are invited for the ICDAR 2021 Awards in both categories. The nomination pack should include the following:<\/p>\n\n\n\n<ol><li>A nominating letter (1 page) including a brief citation to be included in the certificate.<\/li><li>Supporting letters (1 page each) from 3 active researchers from at least 3 different countries.<\/li><\/ol>\n\n\n\n<p>A nomination is usually put forward by a researcher (preferably from a different institution than the nominee) who is knowledgeable of the scientific achievements of the nominee, and who organizes letters of support.<\/p>\n\n\n\n<p>The submission procedure is strictly confidential, and self-nominations are not allowed.<\/p>\n\n\n\n<p>Please send nomination packs electronically to the TC10 and TC11 chairs:<br><strong>Jean-Christophe BURIE<\/strong><em>, TC10 Chair (jcburie[at]univ-lr.fr)<\/em><br><strong>Faisal SHAFAIT<\/strong><em>, TC11 Chair (faisal.shafait[at]seecs.edu.pk)<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"6\">6) ICDAR 2021 Workshop on Human-Document Interaction (3<sup>rd<\/sup> edition)<\/h2>\n\n\n\n<div class=\"wp-block-media-text alignwide has-media-on-the-right is-stacked-on-mobile\" style=\"grid-template-columns:auto 39%\"><figure class=\"wp-block-media-text__media\"><img decoding=\"async\" src=\"https:\/\/grce.labri.fr\/HDI\/images\/2.jpg\" alt=\"\"\/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-normal-font-size\"><strong>September 06, 2021<br>Lausanne 
(Switzerland)<br><a href=\"https:\/\/grce.labri.fr\/HDI\/\">https:\/\/grce.labri.fr\/HDI\/<\/a><\/strong><\/p>\n<\/div><\/div>\n\n\n\n<p>Following the positive feedback and large audience of the first two editions of the HDI workshop, in Kyoto (Japan) 2017 and Sydney (Australia) 2019, the Third International Workshop on Human-Document Interaction (HDI 2021) will focus on how humans interact with written information around them, and on the interfaces between users and documents. The term document is meant here in the widest possible sense, referring to any physical object that carries static or dynamic written information. The workshop aims to create a space for debate between the Document Image Analysis and Recognition and the Human-Computer Interaction communities. We consider that initiating this dialogue is relevant and timely.<br><br><strong>Topics of Interest<\/strong><br>\u25cf&nbsp;&nbsp;&nbsp; Augmented documents<br>\u25cf&nbsp;&nbsp;&nbsp; Linking physical and digital content<br>\u25cf&nbsp;&nbsp;&nbsp; Reading behaviour analysis<br>\u25cf&nbsp;&nbsp;&nbsp; Human factors<br>\u25cf&nbsp;&nbsp;&nbsp; User experience and usability<br>\u25cf&nbsp;&nbsp;&nbsp; Wearable sensors in reading<br>\u25cf&nbsp;&nbsp;&nbsp; Active learning<br>\u25cf&nbsp;&nbsp;&nbsp; Real-time document image analysis algorithms<br>\u25cf&nbsp;&nbsp;&nbsp; Content personalisation<br>\u25cf&nbsp;&nbsp;&nbsp; Anytime document analysis algorithms<br>\u25cf&nbsp;&nbsp;&nbsp; Applications (e.g. 
document editing, interactive translation, collaborative editing)<\/p>\n\n\n\n<p><strong>Key dates<\/strong><br><s>Submission deadline: May, 31<\/s><br>Notification: June, 21<br>Camera Ready: July, 5<br>Workshop: September, 6<br><br><strong>Scope and Motivation<\/strong><br>Visual processing and association is an important capacity in human communication and intellectual behavior. Visual information addresses patterns of understanding as well as spatial assemblies. This also holds for office environments, where specialists are seeking the best possible information assistance for improved processes and decision making.<br>In the meantime, physical and digital documents are settling to coexist in peace &#8211; connecting the two is empowering for both sides. Technological advances, such as augmented reality, permit bringing forms of digital interaction to the physical world and vice versa, while the linking between the physical and the digital world is done in an increasingly fluid and realistic manner.<br>A new generation of readers, conditioned by the affordances offered by electronic content and new media types (e.g. blogs and social media posts), has developed distinct reading behaviours and new ways to interact with written content. 
Wearable sensors allow observing the user and introducing the user context into the loop, offering personalised services by intelligently linking written information with user actions. The Internet of Things is changing the way everyday objects (many of them carriers of text) can influence our actions.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"7\">7) ICDAR 2021 Workshop on Camera-Based Document Analysis and Recognition (CBDAR, 9<sup>th<\/sup> edition)<\/h2>\n\n\n\n<div class=\"wp-block-media-text alignwide has-media-on-the-right is-stacked-on-mobile\" style=\"grid-template-columns:auto 70%\"><figure class=\"wp-block-media-text__media\"><img decoding=\"async\" loading=\"lazy\" width=\"720\" height=\"405\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/06\/image.png\" alt=\"\" class=\"wp-image-1292 size-full\" srcset=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/06\/image.png 720w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/06\/image-300x169.png 300w\" sizes=\"(max-width: 720px) 100vw, 720px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-normal-font-size\"><strong>September 06, in conjunction with ICDAR 2021<br>Lausanne (Switzerland)<br><a class=\"\" href=\"https:\/\/cbdar2021.univ-lr.fr\/\">https:\/\/cbdar2021.univ-lr.fr<\/a><\/strong><\/p>\n<\/div><\/div>\n\n\n\n<p>The aim of the CBDAR workshop is to provide a natural link between document image analysis and the wider computer vision community by attracting cutting-edge research on the topic of Camera-Based Document Analysis and Recognition.<\/p>\n\n\n\n<h4>Topics of Interest<\/h4>\n\n\n\n<p>\u2022 Camera-based acquisition of written information<br>\u2022 Restoration of camera-captured documents (dewarping, deblurring, etc.)<br>\u2022 Camera-based document analysis and recognition<br>\u2022 Document image quality assessment \/ estimation<br>\u2022 Image degradation models for camera-captured 
characters\/documents<br>\u2022 Text extraction from scene images<br>\u2022 Text in video<br>\u2022 Document image retrieval<br>\u2022 Device-constrained techniques and algorithms<br>\u2022 Performance evaluation and metrics<br>\u2022 Mobile OCR<br>\u2022 Smartphone-based document scanning applications<br>\u2022 Feature extraction in camera-capture situations<br><\/p>\n\n\n\n<h4>Important dates<\/h4>\n\n\n\n<p><s>\u2022 Submission Deadline (normal paper): 23 May (hard deadline)<\/s><br><s>\u2022 Submission Deadline (ICDAR re-submission): 23 May (hard deadline)<\/s><br>\u2022 Acceptance Notification: 21 June 2021<br>\u2022 Camera Ready Version: 05 July 2021<br>\u2022 CBDAR 2021 Workshop: Mon, 06 September 2021<\/p>\n\n\n\n<h4>Submission Information &amp; Publication of Proceedings<\/h4>\n\n\n\n<p>Workshop proceedings with accepted papers will be published in the Springer Lecture Notes in Computer Science series (as is the case for the main ICDAR 2021 conference).<\/p>\n\n\n\n<p>For more information, please visit the workshop website and follow the Twitter handle of the CBDAR workshop, @cbdar_workshop:<br><a class=\"\" href=\"https:\/\/cbdar2021.univ-lr.fr\/\">https:\/\/cbdar2021.univ-lr.fr<\/a><\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"8\">8) 14th International Workshop on Graphics Recognition (GREC) (repost)<\/h2>\n\n\n\n<p><em>September 05-<s>06,<\/s> 2021<\/em><br>Lausanne, Switzerland<br><a rel=\"noreferrer noopener\" href=\"http:\/\/grec2021.univ-lr.fr\/\" target=\"_blank\">http:\/\/grec2021.univ-lr.fr\/<\/a><\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"739\" height=\"354\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/03\/image-1.png\" alt=\"\" class=\"wp-image-1224\" srcset=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/03\/image-1.png 739w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/03\/image-1-300x144.png 300w\" sizes=\"(max-width: 
739px) 100vw, 739px\" \/><\/figure>\n\n\n\n<p>GREC workshops provide an excellent opportunity for researchers and practitioners at all levels of experience to meet colleagues and to share new ideas and knowledge about graphics recognition methods. Graphics recognition is a sub-field of document image analysis that deals with graphical entities in engineering drawings, comics, musical scores, sketches, maps, architectural plans, mathematical notation, tables, diagrams, etc.<\/p>\n\n\n\n<p>The aim of this workshop is to foster a very high level of interaction and creative discussion between participants, maintaining a \u201cworkshop\u201d spirit rather than a \u201cmini-conference\u201d model.<\/p>\n\n\n\n<p>The workshop will comprise several sessions dedicated to specific topics related to graphics in document analysis and graphics recognition. For each session, there will be an invited presentation describing the state of the art and stating the open questions for the session\u2019s topic, followed by a number of short presentations proposing solutions to some of these questions or presenting results of the speakers\u2019 work. 
Each session will be concluded by a panel discussion.<\/p>\n\n\n\n<p><strong>Topics<\/strong><\/p>\n\n\n\n<ul><li>Analysis and interpretation of graphical documents, such as: Engineering drawings, floor-plans, mathematical expressions, comics, maps, music scores, patents, diagrams, charts, tables, etc.<\/li><li>Recognition of graphic elements, such as symbols, logos, stamps, drop-caps, drawings, etc.<\/li><li>Identification and localization of graphical mark-ups and annotations in written documents.<\/li><li>Raster-to-vector techniques.<\/li><li>Graphics-based information retrieval.<\/li><li>Historical graphics recognition and indexing.<\/li><li>Forensics (Writer identification\/verification) in graphic documents.<\/li><li>Description of complete systems for interpretation of graphic documents.<\/li><li>Datasets and performance evaluation in graphics recognition.<\/li><li>Authoring, editing, storing and presentation systems for graphics multimedia documents.<\/li><li>3-D models from multiple 2-D views (line drawings).<\/li><li>Digital ink processing.<\/li><li>Sketch recognition and understanding.<\/li><li>Camera-based graphics recognition.<\/li><li>Graphics recognition in born digital documents.<\/li><li>Analysis of graphics on new digital interfaces.<\/li><li>Graphics detection and recognition in real scenes.<\/li><li>Graphics analysis in medical images<\/li><\/ul>\n\n\n\n<h4><strong>Important dates<\/strong><\/h4>\n\n\n\n<ul><li>Submission deadline :<ul><li><s>Abstract submission : <strong>May 10th, 2021 (hard deadline)<\/strong><\/s><\/li><li><s>Full paper submission : <strong>May 17th, 2021<\/strong> \u2013 <strong>11:59PM Pacific Time Zone (hard deadline)<\/strong><\/s><\/li><\/ul><\/li><li>Acceptance notification: <strong>June 20th, 2021<\/strong><\/li><li>Camera ready due: <strong>June 30th, 2021<\/strong><\/li><li>Workshop : <strong>September. 
05th \u2013 <s>06th<\/s>, 2021<\/strong><\/li><\/ul>\n\n\n\n<p>Accepted papers (full and short papers) will be published in a <em>Springer LNCS volume<\/em> dedicated to all ICDAR workshops. More information at: <a rel=\"noreferrer noopener\" href=\"http:\/\/grec2021.univ-lr.fr\/\" target=\"_blank\">http:\/\/grec2021.univ-lr.fr\/<\/a><\/p>\n\n\n\n<p><em>General Chair : Jean-Christophe Burie<br>Program Co-Chair : Richard Zanibbi, Motoi Iwata and Pau Riba<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"9\">9) International conference \u201cFantastic Futures\u201d 2021 (3<sup>rd<\/sup> edition, FF 2021)<\/h2>\n\n\n\n<p><strong><em>December 9-10, 2021<\/em><br>Paris, France<br><a rel=\"noreferrer noopener\" href=\"http:\/\/ai4lam.org\/\" target=\"_blank\">http:\/\/ai4lam.org<\/a><\/strong><\/p>\n\n\n\n<p>The <a href=\"http:\/\/ai4lam.org\" target=\"_blank\" rel=\"noreferrer noopener\">ai4lam<\/a> community is organizing its 3rd international conference \u201cLes futurs fantastiques\u201d, to be held at the Biblioth\u00e8que nationale de France, in Paris on December 9 &amp; 10, 2021. This conference will be in hybrid format online\/onsite.<\/p>\n\n\n\n<p>The program committee is looking for papers, tutorials or workshops proposals on the topic of artificial intelligence applied to libraries, archives and museums.<\/p>\n\n\n\n<p><strong>Please check our Call For Papers: <\/strong><a rel=\"noreferrer noopener\" href=\"https:\/\/easychair.org\/cfp\/FantasticFutures21\" target=\"_blank\">https:\/\/easychair.org\/cfp\/FantasticFutures21<\/a>&nbsp;<\/p>\n\n\n\n<p>We warmly invite you to submit your proposals, in the form of abstracts of <strong>500 words maximum<\/strong>, on the Easychair platform <strong>by June 15, 2021 <\/strong>as instructed in the CFP. 
Proposals are accepted in both languages of the conference, English and French.<\/p>\n\n\n\n<p>We will pay careful attention to every submission when considering its inclusion in the conference program.<\/p>\n\n\n\n<p>For the Futurs Fantastiques 2021 program committee,<\/p>\n\n\n\n<p>Emmanuelle Bermes, PC chair<\/p>\n\n\n\n<p>FF21 CFP: <a href=\"https:\/\/easychair.org\/cfp\/FantasticFutures21\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/easychair.org\/cfp\/FantasticFutures21<\/a><\/p>\n\n\n\n<p>More on ai4lam: <a href=\"http:\/\/ai4lam.org\/\" rel=\"noreferrer noopener\" target=\"_blank\">http:\/\/ai4lam.org<\/a><\/p>\n\n\n\n<p>Previous Conferences:<br>2019&nbsp;:&nbsp;<a rel=\"noreferrer noopener\" href=\"https:\/\/library.stanford.edu\/projects\/fantastic-futures\" target=\"_blank\">https:\/\/library.stanford.edu\/projects\/fantastic-futures<\/a><br>2018&nbsp;:&nbsp;<a rel=\"noreferrer noopener\" href=\"https:\/\/www.nb.no\/artikler\/fantastic-futures\/\" target=\"_blank\">https:\/\/www.nb.no\/artikler\/fantastic-futures\/<\/a><\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"10\">10) Job offers (1 new)<\/h2>\n\n\n\n<h4><strong>Post-doctoral research position &#8211; L3i &#8211; La Rochelle, France<\/strong><\/h4>\n\n\n\n<p><strong>Title: Extraction of graphic elements in comic books for emotion recognition<\/strong><\/p>\n\n\n\n<p>The L3i laboratory has one open post-doc position in computer science, in the specific field of document image analysis and pattern recognition.<\/p>\n\n\n\n<p><strong>Duration<\/strong>: 12 months (an extension of 12 months will be possible)<br><strong>Position available from<\/strong>: As soon as possible, 2021<br><strong>Salary<\/strong>: approximately 2100 \u20ac \/ month (net)<br><strong>Place<\/strong>: L3i lab, University of La Rochelle, France<br><strong>Specialty<\/strong>: Computer Science\/ Image Processing\/ 
Document Analysis\/ Pattern Recognition<br><strong>Contact<\/strong>: Jean-Christophe BURIE (jcburie [at] univ-lr.fr)<\/p>\n\n\n\n<p><strong>Position Description<\/strong><\/p>\n\n\n\n<p>The L3i is a research lab of the University of La Rochelle. La Rochelle, on the Atlantic coast in the south-west of France, is one of the most attractive and dynamic cities in France. The L3i has worked for several years on document analysis and has developed well-known expertise in \u201cbande dessin\u00e9e\u201d, manga and comics analysis, indexing and understanding.<\/p>\n\n\n\n<p>The post-doc\u2019s work will take place in the context of <strong>SAiL<\/strong> (Sequential Art Image Laboratory), a joint laboratory involving the L3i and a private company. The objective is to create innovative tools to index and interact with digitised comics. The work will be done in a team of 10 researchers and engineers.<\/p>\n\n\n\n<p>The work will consist of developing original approaches for extracting and recognizing graphic elements in comic panels in order to recognize emotions. Authors usually use different strategies for representing emotions, such as the shape of speech balloons, specific symbols, the colour of faces, etc. These elements are drawn among the other graphic elements (main characters, scenery, \u2026), making their localisation and extraction challenging. In order to extract these specific elements, the development of original approaches will be necessary. Deep learning-based strategies can be explored to reach this goal. This work will be done in collaboration with other researchers working on text understanding.<\/p>\n\n\n\n<p><strong>Qualifications<\/strong><\/p>\n\n\n\n<p>Candidates must have a completed PhD and research experience <strong>in image processing and analysis<\/strong> and <strong>pattern recognition<\/strong>. 
Some knowledge and experience in deep learning are also recommended.<\/p>\n\n\n\n<p><strong>General Qualifications<\/strong><\/p>\n\n\n\n<p>\u2022 Good programming skills mastering at least one programming language like Python, Java, C\/C++<br>\u2022 Good teamwork skills<br>\u2022 Good writing skills and proficiency in written and spoken English or French<\/p>\n\n\n\n<p><strong>Applications<\/strong><\/p>\n\n\n\n<p>Candidates should send a CV and a motivation letter to jcburie [at] univ-lr.fr.<\/p>\n\n\n\n<p><a href=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/06\/2021_PostDoc_Extraction-of-graphic-elements-in-comics-books-for-emotion-recognition.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Download PDF<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Welcome to the June 2021 edition of the TC10 newsletter. In this issue, you will find Annual ICDAR voting results, latest IJDAR issue, the invitation to the summer school on [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_exactmetrics_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"_links_to":"","_links_to_target":""},"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/posts\/1286"}],"collection":[{"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1286"}],"version-history":[{"count":20,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/posts\/1286\/r
evisions"}],"predecessor-version":[{"id":1312,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/posts\/1286\/revisions\/1312"}],"wp:attachment":[{"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1286"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1286"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1286"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}