{"id":1413,"date":"2021-12-13T10:39:29","date_gmt":"2021-12-13T09:39:29","guid":{"rendered":"https:\/\/iapr-tc10.univ-lr.fr\/?p=1413"},"modified":"2021-12-13T13:03:18","modified_gmt":"2021-12-13T12:03:18","slug":"iapr-tc10-newsletter-148-december-2021","status":"publish","type":"post","link":"https:\/\/iapr-tc10.univ-lr.fr\/?p=1413","title":{"rendered":"[IAPR-TC10] Newsletter 148 &#8211; December 2021"},"content":{"rendered":"\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3-1024x571.png\" alt=\"\" class=\"wp-image-312\" width=\"232\" height=\"129\" srcset=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3-1024x571.png 1024w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3-300x167.png 300w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3-768x429.png 768w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3.png 1025w\" sizes=\"(max-width: 232px) 100vw, 232px\" \/><\/figure><\/div>\n\n\n\n<div class=\"wp-block-media-text alignwide has-media-on-the-right\" style=\"grid-template-columns:auto 28%\"><figure class=\"wp-block-media-text__media\"><img decoding=\"async\" loading=\"lazy\" width=\"440\" height=\"316\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/11\/440px-Butte_PSF.png\" alt=\"\" class=\"wp-image-1422 size-full\" srcset=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/11\/440px-Butte_PSF.png 440w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/11\/440px-Butte_PSF-300x215.png 300w\" sizes=\"(max-width: 440px) 100vw, 440px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p>Welcome to the December 2021 edition of the TC10 newsletter.<\/p>\n\n\n\n<p>In this issue, you will find the <strong>foreword of ICDAR 2021 General 
Chairs<\/strong>, and registered participants can still access the&nbsp;<a href=\"https:\/\/icdar2021.aio-events.com\/\"><strong>digital memory<\/strong> <strong>of the conference<\/strong><\/a>, both for the main conference and the pre-conference events. The&nbsp;<a href=\"https:\/\/icdar2021.org\/proceedings\/\"><strong>Springer<\/strong> <strong>proceedings<\/strong><\/a>&nbsp;are also available online, with <strong>free access<\/strong> for everyone through the links on the conference website. <\/p>\n\n\n\n<p>Following the&nbsp;<a href=\"https:\/\/sites.google.com\/view\/darstrategy\">strategic plan<\/a>&nbsp;of the Document Analysis and Recognition community, ICDAR will be held annually from 2023 onward. The next two editions of the conference will be organized in <strong>San Jos\u00e9, USA<\/strong> (<a href=\"https:\/\/icdar2023.org\">ICDAR 2023<\/a>) and <strong>Athens, Greece<\/strong> (ICDAR 2024). The <strong>call for proposals for ICDAR 2025<\/strong> can be found below (deadline February 28, 2022).<\/p>\n\n\n\n<p>Note that <strong>ICPRAI<\/strong>, <strong>DAS<\/strong> and <strong>ICPR<\/strong> deadlines are approaching. 
You will also find the <strong>ICPR 2022<\/strong> calls for <strong>workshops<\/strong>, <strong>tutorials<\/strong> and <strong>challenges<\/strong> below.<\/p>\n\n\n\n<p>Please take care,<\/p>\n\n\n\n<p>Christophe Rigaud<br>IAPR-TC10 Communications Officer<\/p>\n<\/div><\/div>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<div class=\"is-layout-flow wp-block-group\"><div class=\"wp-block-group__inner-container\">\n<div class=\"is-layout-flow wp-block-group\"><div class=\"wp-block-group__inner-container\">\n<p><span style=\"text-decoration: underline;\">Table of contents:<\/span><br><br>1) <a href=\"#1\">Upcoming deadlines and events<\/a><br>2) <a href=\"#2\">ICDAR 2021 Proceedings: Foreword from the General Chairs<\/a><br>3) <a href=\"#3\">Call for Bids for ICDAR 2025<\/a><br>4) <a href=\"#4\">Online Course on \u201cLiterate Models for Vision\u201d<\/a><br>5) <a href=\"#5\">Call For Papers: Document Analysis Systems (DAS 2022)<\/a> &#8211; repost<br>6) <a href=\"#6\">Call For Papers, Workshops, Tutorials, Challenges: ICPR 2022<\/a><br>7) <a href=\"#7\">Call for Papers ICPRAI 2022<\/a><br>8) <a href=\"#8\">IJDAR article alert<\/a> &#8211; new<br>9) <a href=\"#9\">Job offer<\/a> &#8211; repost<\/p>\n<\/div><\/div>\n<\/div><\/div>\n\n\n\n<p><strong>Call for contributions:<\/strong> feel free to contribute to TC10 newsletters by sending any relevant news, event, notice, open position, dataset or link to us at iapr.tc10[at]gmail.com<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"1\">1) Upcoming deadlines and events<\/h2>\n\n\n\n<h4>2021<\/h4>\n\n\n\n<ul><li>Deadlines:<ul><li><strong>December 15<\/strong>, <em>paper submission deadline<\/em> <a rel=\"noreferrer noopener\" href=\"https:\/\/icprai2022.sciencesconf.org\/\" target=\"_blank\">ICPRAI 2022<\/a><\/li><li><strong>December 20<\/strong>, <em>challenge proposal submission<\/em> <a rel=\"noreferrer noopener\" href=\"http:\/\/www.icpr2022.com\" target=\"_blank\">ICPR 
2022<\/a><\/li><\/ul><\/li><\/ul>\n\n\n\n<p><strong>2022 and later<\/strong><\/p>\n\n\n\n<ul><li>Deadlines:<ul><li><strong>January 4<\/strong>, <em>paper submission deadline<\/em> <a rel=\"noreferrer noopener\" href=\"https:\/\/das2022.univ-lr.fr\/\" target=\"_blank\">DAS 2022<\/a><\/li><li><strong>January 10<\/strong>, <em>paper submission deadline<\/em> <a rel=\"noreferrer noopener\" href=\"http:\/\/www.icpr2022.com\" target=\"_blank\">ICPR 2022<\/a><\/li><li><strong>January 17<\/strong>, <em>workshop proposal submission<\/em> <a rel=\"noreferrer noopener\" href=\"http:\/\/www.icpr2022.com\" target=\"_blank\">ICPR 2022<\/a><\/li><li><strong>February 28<\/strong>, <em>bid proposal deadline<\/em> ICDAR 2025<\/li><li><strong>March 14<\/strong>, <em>tutorial proposal submission<\/em> <a rel=\"noreferrer noopener\" href=\"http:\/\/www.icpr2022.com\" target=\"_blank\">ICPR 2022<\/a><\/li><li><strong>April<\/strong>,\u00a0<em>paper submission deadline<\/em>\u00a0<a href=\"http:\/\/icfhr2022.org\/\">ICFHR 2022<\/a><\/li><\/ul><\/li><li>Events:<ul><li><strong>May 22-25<\/strong>, <em>conference<\/em> <a rel=\"noreferrer noopener\" href=\"https:\/\/das2022.univ-lr.fr\/\" target=\"_blank\">DAS 2022<\/a>, La Rochelle, France<\/li><li><strong>May 31<\/strong>, <em>conference<\/em> <a rel=\"noreferrer noopener\" href=\"https:\/\/icprai2022.sciencesconf.org\/\" target=\"_blank\">ICPRAI 2022<\/a>, Paris, France<\/li><li><strong>August 21-25<\/strong>, <em>conference<\/em> <a rel=\"noreferrer noopener\" href=\"http:\/\/www.icpr2022.com\" target=\"_blank\">ICPR 2022<\/a>, Montr\u00e9al, Qu\u00e9bec (QC), Canada<\/li><li><strong><strong>December<\/strong> 
<strong>2022<\/strong><\/strong>, <em>conference<\/em> <a href=\"http:\/\/www.icfhr2022.org\">ICFHR 2022<\/a>, Hyderabad, India<\/li><li><strong>August 2023<\/strong>, <em>conference<\/em> <a href=\"https:\/\/icdar2023.org\/\">ICDAR 2023<\/a>, San Jos\u00e9, California, USA<\/li><\/ul><\/li><\/ul>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"2\">2) ICDAR 2021 Proceedings: Foreword from the General Chairs<\/h2>\n\n\n\n<p>Online version:&nbsp;<a href=\"https:\/\/icdar2021.org\/proceedings\/\">ICDAR 2021 Proceedings<\/a><\/p>\n\n\n\n<p>Our warmest welcome to the proceedings of ICDAR 2021, the 16th IAPR International Conference on Document Analysis and Recognition, which was held in Switzerland for the first time. Organizing an international conference of significant size during the COVID-19 pandemic, with the goal of welcoming at least some of the participants physically, is similar to navigating a rowboat across the ocean during a storm. Fortunately, we were able to work together with partners who have shown a tremendous amount of flexibility and patience including, in particular, our local partners, namely the Beaulieu convention center in Lausanne, EPFL, and Lausanne Tourisme, and also the international ICDAR advisory board and IAPR-TC 10\/11 leadership teams who have supported us not only with excellent advice but also financially, encouraging us to set up a hybrid format for the conference.<\/p>\n\n\n\n<p>We were not a hundred percent sure if we would see each other in Lausanne, but we remained confident, together with almost half of the attendees who registered for on-site participation. We relied on the hybridization support of a motivated team from the Lule\u00e5 University of Technology during the pre-conference, and professional support from Imavox during the main conference, to ensure a smooth connection between the physical and the virtual world. 
Indeed, our welcome is extended especially to all our colleagues who were not able to travel to Switzerland this year. We hope you had an exciting virtual conference week, and look forward to seeing you in person again at another event of the active DAR community.<\/p>\n\n\n\n<p>With ICDAR 2021, we stepped into the shoes of a longstanding conference series, which is the premier international event for scientists and practitioners involved in document analysis and recognition, a field of growing importance in the current age of digital transitions. The conference is endorsed by IAPR-TC 10\/11 and celebrates its 30th anniversary this year with the 16th edition. The very first ICDAR conference was held in St.&nbsp;Malo, France in 1991, followed by Tsukuba, Japan (1993), Montreal, Canada (1995), Ulm, Germany (1997), Bangalore, India (1999), Seattle, USA (2001), Edinburgh, UK (2003), Seoul, South Korea (2005), Curitiba, Brazil (2007), Barcelona, Spain (2009), Beijing, China (2011), Washington DC, USA (2013), Nancy, France (2015), Kyoto, Japan (2017), and Sydney, Australia (2019).<\/p>\n\n\n\n<p>The attentive reader may have remarked that this list of cities includes several venues for the Olympic Games. This year the conference was hosted in Lausanne, which is the headquarters of the International Olympic Committee. Not unlike the athletes who were recently competing in Tokyo, Japan, the researchers profited from a healthy spirit of competition, aimed at advancing our knowledge on how a machine can understand written communication. 
Indeed, following the tradition from previous years, 13 scientific competitions were held in conjunction with ICDAR 2021 including, for the first time, three so-called &#8220;long-term&#8221; competitions addressing wider challenges that may continue over the next few years.<\/p>\n\n\n\n<p>Other highlights of the conference included the keynote talks given by Masaki Nakagawa, recipient of the IAPR\/ICDAR Outstanding Achievements Award, and Micka\u00ebl Coustaty, recipient of the IAPR\/ICDAR Young Investigator Award, as well as our distinguished keynote speakers Prem Natarajan, vice president at Amazon, who gave a talk on &#8220;OCR: A Journey through Advances in the Science, Engineering, and Productization of AI\/ML&#8221;, and Beta Megyesi, professor of computational linguistics at Uppsala University, who elaborated on &#8220;Cracking Ciphers with &#8216;AI-in-the-loop&#8217;: Transcription and Decryption in a Cross-Disciplinary Field&#8221;.<\/p>\n\n\n\n<p>A total of 340 publications were submitted to the main conference, which was held at the Beaulieu convention center during September 8-10, 2021. Based on the reviews, our Program Committee chairs accepted 40 papers for oral presentation and 142 papers for poster presentation. In addition, nine articles accepted for the ICDAR-IJDAR journal track were presented orally at the conference and a workshop was integrated in a poster session. Furthermore, 12 workshops, 2 tutorials, and the doctoral consortium were held during the pre-conference at EPFL during September 5-7, 2021, focusing on specific aspects of document analysis and recognition, such as graphics recognition, camera-based document analysis, and historical documents.<\/p>\n\n\n\n<p>The conference would not have been possible without hundreds of hours of work done by volunteers in the organizing committee. 
First of all, we would like to express our deepest gratitude to our Program Committee chairs, Josep Llad\u00f3s, Dan Lopresti, and Seiichi Uchida, who oversaw a comprehensive reviewing process and designed the intriguing technical program of the main conference. We are also very grateful for all the hours invested by the members of the Program Committee to deliver high-quality peer reviews. Furthermore, we would like to highlight the excellent contribution by our publication chairs, Liangrui Peng, Fouad Slimane, and Oussama Zayene, who negotiated great online visibility of the conference proceedings with Springer and ensured flawless camera-ready versions of all publications. Many thanks also to our chairs and organizers of the workshops, competitions, tutorials, and the doctoral consortium for setting up such an inspiring environment around the main conference. Finally, we are thankful for the support we have received from the sponsorship chairs, from our valued sponsors, and from our local organization chairs, which enabled us to put in the extra effort required for a hybrid conference setup.<\/p>\n\n\n\n<p>Our main motivation for organizing ICDAR 2021 was to give practitioners in the DAR community a chance to showcase their research, both at this conference and its satellite events. Thank you to all the authors for submitting and presenting your outstanding work. 
We sincerely hope that you enjoyed the conference and the exchange with your colleagues, be it on-site or online.<\/p>\n\n\n\n<p><strong>Andreas Fischer, Rolf Ingold, and Marcus Liwicki<\/strong><br><strong>ICDAR 2021 General Chairs<\/strong><\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"3\">3) Call for Bids for ICDAR 2025<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">Deadline: February 28, 2022<\/pre>\n\n\n\n<p><span style=\"text-decoration: underline;\">Submission Method<\/span>: email to faisal.shafait@seecs.edu.pk \/ jean-christophe.burie@univ-lr.fr<\/p>\n\n\n\n<p>ICDAR is the flagship event of TC10\/11, which has been held biennially since its inception in 1991. The aim of ICDAR is to bring together international experts to share their experiences and to promote research and development in all areas of Document Analysis and Recognition. Since ICDAR will be organized as an annual event starting from 2024, the ICDAR Advisory Board is seeking proposals to host the 19<sup>th<\/sup> International Conference on Document Analysis and Recognition, to be held in 2025 (ICDAR2025).<\/p>\n\n\n\n<p>A link to the most current version of the guidelines appears below. Please check on the website of TC11 for the latest version.<\/p>\n\n\n\n<p><a href=\"http:\/\/www.iapr-tc11.org\/mediawiki\/images\/ICDAR_Guidelines_2016_02_27.pdf\">http:\/\/www.iapr-tc11.org\/mediawiki\/images\/ICDAR_Guidelines_2016_02_27.pdf<\/a><\/p>\n\n\n\n<p>The submission of a bid implies full agreement with the rules and procedures outlined in that document.<\/p>\n\n\n\n<p>The submitted proposal must clearly define the items specified in the guidelines (Section 5.2).<\/p>\n\n\n\n<p>It has been the tradition that the location of ICDAR conferences follows a rotating schedule among different continents. Hence, proposals from Asia are strongly encouraged. 
However, high-quality bids from other locations, for example, from countries where we have had no ICDAR before, will also be considered. Proposals will be examined by the ICDAR Advisory Board.<\/p>\n\n\n\n<p>Proposals should be emailed to Dr Faisal Shafait at faisal.shafait@seecs.edu.pk and Dr Jean-Christophe Burie at jean-christophe.burie@univ-lr.fr by February 28, 2022.<\/p>\n\n\n\n<p>ICDAR Advisory Board,<\/p>\n\n\n\n<p>Faisal Shafait (Chair, TC11)<br>Jean-Christophe Burie (Chair, TC10)<br>Elisa Barney Smith (Chair, IAPR C&amp;M)<br>Koichi Kise<br>C. V. Jawahar<br>Dimosthenis Karatzas<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"4\">4) Online Course on \u201cLiterate Models for Vision\u201d<\/h2>\n\n\n\n<div class=\"wp-block-media-text alignwide has-media-on-the-right is-stacked-on-mobile\"><figure class=\"wp-block-media-text__media\"><img decoding=\"async\" loading=\"lazy\" width=\"1024\" height=\"397\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/12\/photo-literate-1024x397.jpg\" alt=\"\" class=\"wp-image-1425 size-full\" srcset=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/12\/photo-literate-1024x397.jpg 1024w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/12\/photo-literate-300x116.jpg 300w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/12\/photo-literate-768x298.jpg 768w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/12\/photo-literate-1536x596.jpg 1536w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/12\/photo-literate-1320x512.jpg 1320w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/12\/photo-literate.jpg 1770w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p>WHEN: Monday 20 December 2021 from 10:00 to 17:00 CET<\/p>\n\n\n\n<p>WHERE: Online<\/p>\n\n\n\n<p>HOW TO REGISTER: <a 
href=\"https:\/\/www.i-aida.org\/course\/vision-and-language-reading-systems-and-multi-modal-representations\/\">https:\/\/www.i-aida.org\/course\/vision-and-language-reading-systems-and-multi-modal-representations\/<\/a><\/p>\n<\/div><\/div>\n\n\n\n<p>The Computer Vision Center (CVC) organizes a short interactive course on \u201cLiterate Models for Vision\u201d offered through the International Artificial Intelligence Doctoral Academy (AIDA). Participants will have a chance to catch up with the state of the art in reading systems, especially scene text recognition, and explore how image text enables us to tackle new and exciting computer vision tasks such as fine-grained image classification, cross-modal retrieval, captioning and visual question answering.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"alignright size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"284\" height=\"190\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/09\/image.png\" alt=\"\" class=\"wp-image-1372\"\/><\/figure><\/div>\n\n\n\n<h2 id=\"5\">5) Call For Papers: <strong>Document Analysis Systems<\/strong> (DAS 2022) &#8211; repost<\/h2>\n\n\n\n<p><strong>DAS 2022 &#8211; Document Analysis Systems<\/strong><em><br>La Rochelle &#8211; France<br>22-25 May 2022<br><a href=\"https:\/\/das2022.univ-lr.fr\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/das2022.univ-lr.fr<\/a><\/em><\/p>\n\n\n\n<p>DAS 2022 will be hosted at La Rochelle University, situated in a beautiful historical city on the French west coast, near the tourist harbour and close to the old city center. The city is easily accessible by train, plane, car and bicycle.<\/p>\n\n\n\n<p>The DAS 2022 program will follow the traditional format, with single-track technical sessions containing contributed papers, invited talks, awards, and tutorials, as in previous DAS editions. 
Poster Sessions will be organized with Poster Lightning Talks in order to give poster presenters the possibility to point out the highlights of their work to a broad audience. Special focus will be given to the Discussion Groups, which highlight and maintain the workshop character of DAS, and to discussions with industrial partners to raise current trends and challenges.<\/p>\n\n\n\n<p>DAS 2022 will accept both <strong>full papers<\/strong> (up to 15 pages, presented orally or by poster) and <strong>short papers<\/strong> (up to 4 pages, presented as posters or demonstrations). All paper submissions will undergo a rigorous review process that will consider the originality, quality of work, presentation of ideas, and relevance to document analysis system research. Springer will publish accepted full papers as part of the workshop\u2019s LNCS proceedings, while short papers will be published separately in a companion booklet.<\/p>\n\n\n\n<p>See more on the <a rel=\"noreferrer noopener\" href=\"https:\/\/das2022.univ-lr.fr\" target=\"_blank\">website<\/a>.<\/p>\n\n\n\n<h3>IMPORTANT DATES<\/h3>\n\n\n\n<p>Paper submission deadline: Jan. 4, 2022<br>Notification: March 8, 2022<br>Camera ready: April 1, 2022<br>Conference: May 22-25, 2022<\/p>\n\n\n\n<h3>SUBMISSION TYPES<\/h3>\n\n\n\n<p>DAS 2022 submissions should be in Springer LNCS format; full and short paper submissions will be accepted, as described below.<br>Submission link: <a href=\"https:\/\/easychair.org\/conferences\/?conf=das2022\">https:\/\/easychair.org\/conferences\/?conf=das2022<\/a><\/p>\n\n\n\n<p><strong>Full papers<\/strong><\/p>\n\n\n\n<p>Full papers should describe complete works of original research. Authors are invited to submit original unpublished research papers, up to 15 pages in length, that are not being considered in another forum. This restriction does not apply to unpublished technical reports or papers included in self-archive repositories (departmental, arXiv.org, etc.) 
that are not peer-reviewed.<\/p>\n\n\n\n<p><strong>Short papers<\/strong><\/p>\n\n\n\n<p>Short papers provide an opportunity to report on research in progress and to present demos and novel positions on document analysis systems. Authors may submit short papers (up to 4 pages in length). Short papers will also undergo review and will appear in an extra booklet, not in the official DAS2022 proceedings.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"1024\" height=\"140\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/09\/image-2-1024x140.png\" alt=\"\" class=\"wp-image-1380\" srcset=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/09\/image-2-1024x140.png 1024w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/09\/image-2-300x41.png 300w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/09\/image-2-768x105.png 768w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/09\/image-2-1320x180.png 1320w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/09\/image-2.png 1387w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"6\">6) Call for papers, workshops, tutorials, challenges: 26th International Conference on Pattern Recognition (ICPR 2022)<\/h2>\n\n\n\n<p>This section contains four calls for papers\/proposals, corresponding to the ICPR 2022 main conference, workshops, tutorials and challenges.<\/p>\n\n\n\n<h2>Main conference<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">August 21-25, 2022\nMontreal, Canada\nWebsite: <a rel=\"noreferrer noopener\" href=\"https:\/\/www.icpr2022.com\" target=\"_blank\">https:\/\/www.icpr2022.com<\/a>\nPDF version of this call: <a href=\"https:\/\/www.icpr2022.com\/wp-content\/uploads\/2021\/08\/2021-07-14-ICPR-2022-Call-for-papers-2.pdf\" target=\"_blank\" rel=\"noreferrer 
noopener\">https:\/\/www.icpr2022.com\/wp-content\/uploads\/2021\/08\/2021-07-14-ICPR-2022-Call-for-papers-2.pdf<\/a><\/pre>\n\n\n\n<p>ICPR 2022 is the premier world conference in Pattern Recognition. It covers both theoretical issues and applications of the discipline. We solicit original research for publication in the main conference. Topics of interest include all aspects of Pattern Recognition.<\/p>\n\n\n\n<h3>Important Dates<\/h3>\n\n\n\n<pre class=\"wp-block-preformatted\">Jan 10 Paper registration deadline<br>Jan 17 Paper submission deadline<br>Mar 14 Acceptance\/Rejection\/Revision decision<br>Apr 11 Revision\/rebuttal deadline<br>May 09 Final decision on submissions<br>Jun 06 Camera ready manuscript deadline<br>Jun 06 Early bird registration deadline<\/pre>\n\n\n\n<p>ICPR 2022 will employ a two-round review process. Papers must be registered prior to submission, and all submissions take place through the PaperCept conference management system.<\/p>\n\n\n\n<p>Papers submitted by the paper deadline will be reviewed using single-blind peer review. Authors are required to include their names and affiliations in their paper as illustrated in the sample paper templates. Submissions must identify the preferred track among the six conference tracks.<\/p>\n\n\n\n<p>The result of the first review round will either be accept (possibly with recommended changes), reject, or revise to resubmit for a second review round. Accepted papers will be published by IEEE and be available in IEEE Xplore. 
Submissions must be limited to six pages plus additional pages for references.<\/p>\n\n\n\n<p>For the full statement of IAPR ethical requirements for authors, see the webpage <a href=\"https:\/\/iapr.org\/constitution\/soe.php\">https:\/\/iapr.org\/constitution\/soe.php<\/a>.<\/p>\n\n\n\n<p>More info: <a rel=\"noreferrer noopener\" href=\"https:\/\/www.icpr2022.com\" target=\"_blank\">https:\/\/www.icpr2022.com<\/a><\/p>\n\n\n\n<p>See <a href=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/11\/3mnNP6-2021-07-14-ICPR-2022-Call-for-papers-2.pdf\"><strong>PDF call for papers<\/strong><\/a>.<\/p>\n\n\n\n<h2>Workshops<\/h2>\n\n\n\n<p>The ICPR 2022 Workshop Chairs invite proposals for the 26th International Conference on Pattern Recognition, which is to be held in Montr\u00e9al, Qu\u00e9bec (QC), Canada during August 21-25, 2022. Workshops can be half- or full-day, and it is also possible to hold workshops that will be operated in a virtual format, but it is expected that most workshops will take place at the same venue as the main conference.<br>We seek workshops on timely topics and applications of Computer Vision, Image and Sound Analysis, Pattern Recognition and Artificial Intelligence. Workshops are expected to provide a forum for the active exchange of ideas and experiences. Members from all segments of the ICPR community are invited to submit workshop proposals for review. Each proposal will be assessed for its scientific content, proposed structure and overall relevance. 
Workshop organizers will be responsible for inviting speakers and ensuring their participation, handling submission and review of papers, and structuring and leading discussion sessions.<\/p>\n\n\n\n<h3>Guidelines for submitting proposals<\/h3>\n\n\n\n<p>The workshop proposal should be submitted via email to the ICPR 2022 Workshop Chairs at workshops@icpr2022.com by January 17th, 2022 (11:59 PM Pacific Time).<br>You will receive an acknowledgement of receipt by email within a few working days.<\/p>\n\n\n\n<h3>Important Dates<\/h3>\n\n\n\n<ul><li>Workshop proposals due: January 17, 2022<\/li><li>Workshop proposal decisions: February 14, 2022<\/li><li>Recommended workshop paper deadline: June 6, 2022<\/li><li>Early bird registration deadline: June 6, 2022<\/li><li>Conference: August 21-25, 2022<\/li><li>Tutorials\/Workshops: August 21, 2022<\/li><\/ul>\n\n\n\n<h3>Contacts<\/h3>\n\n\n\n<p>ICPR 2022 Workshop Co-Chairs:<br>\u25cf Jonathan Wu (Canada) &#8211; jwu@uwindsor.ca<br>\u25cf Laurence Likforman (France) &#8211; likforman@telecom-paristech.fr<br>\u25cf Giovanni Maria Farinella (Italy) &#8211; gfarinella@dmi.unict.it<br>\u25cf Xiang Bai (China) &#8211; xbai@hust.edu.cn<\/p>\n\n\n\n<p>More details on <strong><a href=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/11\/2021-07-14-ICPR-2022-Call-for-workshops.pdf\">PDF call for workshops<\/a><\/strong>.<\/p>\n\n\n\n<h2>Tutorials<\/h2>\n\n\n\n<p>The ICPR 2022 Organizing Committee invites proposals for tutorials in conjunction with the 26th International Conference on Pattern Recognition, which is to be held at Montr\u00e9al, Qu\u00e9bec (QC), Canada during August 21-25, 2022. We seek tutorials on core techniques, application areas and emerging research topics that are of interest within the ICPR community. An effective and informative tutorial should provide a broad introduction to the chosen research area as well as in-depth coverage on selected advanced topics. 
Proposals that focus exclusively on the presenters\u2019 own work or commercial presentations are not acceptable.<\/p>\n\n\n\n<h3>Guidelines for submitting proposals<\/h3>\n\n\n\n<p>To propose a tutorial, a PDF file containing the information outlined below must be submitted by email to tutorials@icpr2022.com.<\/p>\n\n\n\n<h3>Important dates<\/h3>\n\n\n\n<p>Submission of proposals: March 14, 2022 [11:59 p.m. Central European Time]<br>Notification of acceptance: April 11, 2022<br>Early bird registration deadline: June 6, 2022<br>Conference: August 21-25, 2022<br>Tutorials\/Workshops: August 21, 2022<\/p>\n\n\n\n<p>More details on <a href=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/11\/2021-07-14-ICPR-2022-Call-for-tutorials.pdf\"><strong>PDF call for tutorials<\/strong><\/a>.<\/p>\n\n\n\n<h2>Challenges<\/h2>\n\n\n\n<p>The ICPR 2022 Challenges Co-Chairs invite proposals for challenges to be held within the framework of the 26th International Conference on Pattern Recognition, which is to be held at Montr\u00e9al, Qu\u00e9bec (QC), Canada during August 21-25, 2022. The aim of the challenges is to advance algorithm and method development in Pattern Recognition by objective evaluation on common datasets. 
The challenge organizers are responsible for providing good-quality data and defining objective evaluation criteria that are applied to the results of submitted algorithms.<\/p>\n\n\n\n<h3>Submission<\/h3>\n\n\n\n<p>Proposals should be submitted by electronic mail to the ICPR Challenges Chairs:<br>Dimosthenis Karatzas (dimos@cvc.uab.es)<br>Marco Bertini (marco.bertini@unifi.it)<\/p>\n\n\n\n<h3>Important Dates<\/h3>\n\n\n\n<p>December 20, 2021: Submission of proposals for challenges<br>January 10, 2022: Notification of acceptance<br>May 20, 2022: Challenge report due<br>June 6, 2022: Camera ready report submission<br>June 6, 2022: Early bird conference registration deadline<br>August 21, 2022: Challenge presentation date<\/p>\n\n\n\n<p>If you have any questions, please contact the ICPR2022 Challenges Chairs.<\/p>\n\n\n\n<p>More details on <strong><a href=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/11\/ICPR-2022-Call-for-Challenges.pdf\">PDF call for challenges<\/a><\/strong>.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"7\">7) <strong>Call for Papers ICPRAI 2022<\/strong><\/h2>\n\n\n\n<p><strong>Endorsed by IAPR<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">Deadline for Paper submission: 15\/12\/2021<\/pre>\n\n\n\n<p>Printed in LNCS proceedings volume<\/p>\n\n\n\n<p>You are invited to submit a paper for the third International Conference on Pattern Recognition and Artificial Intelligence (ICPRAI 2022). It will be held in Paris, 1<sup>st<\/sup> to 3<sup>rd<\/sup> June 2022.<\/p>\n\n\n\n<p>See at <a href=\"https:\/\/icprai2022.sciencesconf.org\">https:\/\/icprai2022.sciencesconf.org<\/a><\/p>\n\n\n\n<p>At the moment, in Paris (France), we cannot predict what the health situation will be next June. In this context, we cannot say for sure how the conference will take place. 
We hope most of you can travel for an on-site conference, but a hybrid version is also possible.<\/p>\n\n\n\n<p><strong>Scope of the Conference<\/strong><\/p>\n\n\n\n<p>The conference aims to bring together researchers, students and practitioners of pattern recognition and artificial intelligence to present and discuss new advances.<\/p>\n\n\n\n<p><strong>Pattern recognition<\/strong>: recognition of different types of patterns, feature extraction \/ selection and evaluation, structural \/ statistical approaches<\/p>\n\n\n\n<p><strong>Computer vision<\/strong>: image processing \/ analysis, segmentation, object recognition, scene understanding<\/p>\n\n\n\n<p><strong>Artificial intelligence<\/strong>: machine \/ deep learning, expert systems, system interpretability, knowledge representation, perception, semantic analysis, intelligent systems<\/p>\n\n\n\n<p><strong>Big data<\/strong>: data visualization, volume \/ velocity \/ data variety, small sample size, supercomputing, cloud, data mining and performance evaluation<\/p>\n\n\n\n<p><strong>With applications <\/strong>related to: handwriting, document, text, language processing, e-learning, image processing \/ analysis, bio-medical imaging, remote sensing, image retrieval, 2D \/ 3D images and graphics, audio \/ video, multimedia applications, security and forensic studies, mobile applications, face, fingerprint, iris, brain, strategic objects and targets, industrial applications of PRAI, innovation and technology transfer, financial trends and analysis, traffic analysis and smart transportation systems, robotics and autonomous vehicles &#8230;<\/p>\n\n\n\n<p>With 5 <strong>special sessions<\/strong><\/p>\n\n\n\n<ul><li>Medical Applications of Pattern Recognition and AI<\/li><li>Analysis and learning of multi-variate, multi-temporal, multi-resolution and multi-source remote sensing data<\/li><li>Graphs for Pattern Recognition: Representations, Theory and Applications<\/li><li>Time series analysis<\/li><li><a 
href=\"https:\/\/bgmv-xai.labri.fr\/\">Vis&amp;ML for XAI:&nbsp;Bridging the gap between ML and visualization communities for eXplainable Artificial Intelligence<\/a>&nbsp;<\/li><\/ul>\n\n\n\n<p>and<\/p>\n\n\n\n<p><strong>3 keynotes<\/strong><\/p>\n\n\n\n<p>A <strong>Special Issue<\/strong> of the IJPRAI journal is scheduled for the best papers,<\/p>\n\n\n\n<p>as well as a <strong>Special Section<\/strong> of the&nbsp;<a href=\"https:\/\/www.journals.elsevier.com\/pattern-recognition-letters\">Pattern Recognition Letters<\/a>&nbsp;(Elsevier) journal.<\/p>\n\n\n\n<p><strong>Proposed by:<\/strong><\/p>\n\n\n\n<p>Honorary Chair: Ching Y. Suen (Canada)<\/p>\n\n\n\n<p>General Chair: Nicole Vincent (France)<\/p>\n\n\n\n<p>Conference Co-Chairs: Edwin Hancock (UK), Yuan Y. Tang (China)<\/p>\n\n\n\n<p>Program Chairs: Mounim El Yacoubi (France), Umapada Pal (India), Eric Granger (Canada), Pong C. Yuen (China)<\/p>\n\n\n\n<p><strong>Key dates<\/strong><\/p>\n\n\n\n<p>Deadline for Paper submission: 15\/12\/2021<\/p>\n\n\n\n<p>Author notification: 8\/03\/2022<\/p>\n\n\n\n<p><strong>Submissions<\/strong><\/p>\n\n\n\n<p>The conference solicits papers covering any of these topics. 
Papers will be 12 pages of content in the Springer LNCS style and should report on novel, unpublished work.<\/p>\n\n\n\n<p>The proceedings of the conference will be published as a Lecture Notes in Computer Science (LNCS) volume.<\/p>\n\n\n\n<p><strong>Contact<\/strong><\/p>\n\n\n\n<p><a href=\"mailto:icprai2022@sciencesconf.org\">icprai2022@sciencesconf.org<\/a><\/p>\n\n\n\n<p>Sponsors are: <a href=\"https:\/\/imds-world.com\/en\/\">IMDS<\/a>, <a href=\"https:\/\/www.idemia.com\/\">IDEMIA<\/a>, <a href=\"https:\/\/w3.mi.parisdescartes.fr\/sip-lab\/\">LIPADE<\/a>, <a href=\"https:\/\/u-paris.fr\/en\/faculty-of-sciences\/\">Universit\u00e9 de Paris facult\u00e9 des Sciences<\/a><\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"8\">8) IJDAR article alert (vol. 24)<\/h2>\n\n\n\n<p><strong>Volume 24, Issue 3, September 2021<\/strong><br><a rel=\"noreferrer noopener\" href=\"https:\/\/link.springer.com\/journal\/10032\/volumes-and-issues\/24-3\" target=\"_blank\">https:\/\/link.springer.com\/journal\/10032\/volumes-and-issues\/24-3<\/a><\/p>\n\n\n\n<p>Titles of the 10 articles:<\/p>\n\n\n\n<ul><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00385-1\">Editorial for special issue on \u201cAdvanced Topics in Document Analysis and Recognition\u201d<\/a><br>Josep Llad\u00f3s, Daniel Lopresti &amp; Seiichi Uchida<\/li><\/ul>\n\n\n\n<ul><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00375-3\">Learning from similarity and information extraction from structured documents<\/a><br>Martin Hole\u010dek<\/li><\/ul>\n\n\n\n<ul><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00371-7\">Learning-free pattern detection for manuscript research<\/a><br>Hussein Mohammed, Volker M\u00e4rgner &amp; Giovanni Ciotti<\/li><\/ul>\n\n\n\n<ul><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00379-z\">Revealing a history: palimpsest text separation with generative 
networks<\/a><br>Anna Starynska, David Messinger &amp; Yu Kong<\/li><\/ul>\n\n\n\n<ul><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00377-1\">A two-step framework for text line segmentation in historical Arabic and Latin document images<\/a><br>Olfa Mechi, Maroua Mehri, Rolf Ingold &amp; Najoua Essoukri Ben Amara<\/li><\/ul>\n\n\n\n<ul><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00369-1\">Self-supervised deep metric learning for ancient papyrus fragments retrieval<\/a><br>Antoine Pirrone, Marie Beurton-Aimar &amp; Nicholas Journet<\/li><\/ul>\n\n\n\n<ul><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00383-3\">Asking questions on handwritten document collections<\/a><br>Minesh Mathew, Lluis Gomez, Dimosthenis Karatzas &amp; C. V. Jawahar<\/li><\/ul>\n\n\n\n<ul><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00378-0\">EAML: ensemble self-attention-based mutual learning network for document image classification<\/a><br>Souhail Bakkali, Zuheng Ming, Micka\u00ebl Coustaty &amp; Mar\u00e7al Rusi\u00f1ol&nbsp;<\/li><\/ul>\n\n\n\n<ul><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00380-6\">Beyond document object detection: instance-level segmentation of complex layouts<\/a><br>Sanket Biswas, Pau Riba, Josep Llad\u00f3s &amp; Umapada Pal<\/li><\/ul>\n\n\n\n<ul><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00376-2\">Data Augmentation using Geometric, Frequency, and Beta Modeling approaches for Improving Multi-lingual Online Handwriting Recognition<\/a><br>Yahia Hamdi, Houcine Boubaker &amp; Adel M. 
Alimi<\/li><\/ul>\n\n\n\n<p><strong>Volume 24, Issue 4, December 2021<\/strong><br><a href=\"https:\/\/link.springer.com\/journal\/10032\/volumes-and-issues\/24-4\">https:\/\/link.springer.com\/journal\/10032\/volumes-and-issues\/24-4<\/a><\/p>\n\n\n\n<p>Titles of the 5 articles:<\/p>\n\n\n\n<ul><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00370-8\">Segmentation of text lines using multi-scale CNN from warped printed and handwritten document images<\/a><br>Arpita Dutta, Arpan Garai, Samit Biswas &amp; Amit Kumar Das<\/li><\/ul>\n\n\n\n<ul><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00373-5\">TextPolar: irregular scene text detection using polar representation<\/a><br>Jie Chen &amp; Zhouhui Lian<\/li><\/ul>\n\n\n\n<ul><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00374-4\">SKFont: skeleton-driven Korean font generator with conditional deep adversarial networks<\/a><br>Debbie Honghee Ko, Ammar Ul Hassan, Jungjae Suk &amp; Jaeyoung Choi<\/li><\/ul>\n\n\n\n<ul><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00381-5\">A hybrid approach to recognize generic sections in scholarly documents<\/a><br>Shoubin Li &amp; Qing Wang<\/li><\/ul>\n\n\n\n<ul><li><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10032-021-00382-4\">Extracting text from scanned Arabic books: a large-scale benchmark dataset and a fine-tuned Faster-R-CNN model<\/a><br>Randa Elanwar, Wenda Qin, Margrit Betke &amp; Derry Wijaya<\/li><\/ul>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"9\">9) Job offers &#8211; repost<\/h2>\n\n\n\n<h3 id=\"toc_13\">Research Engineer\/PostDoc Position (2.5 Years) &#8211; IRISA\/INSA Rennes (France)<\/h3>\n\n\n\n<h4>Title: Combining Deep and Syntactical Models for a Self-adaptive Optical Music Recognition System applied on Historical Orchestra Scores<\/h4>\n\n\n\n<p><strong class=\"\">PDF version<\/strong><\/p>\n\n\n\n<p><a 
href=\"https:\/\/www-intuidoc.irisa.fr\/files\/2021\/10\/SujetInge_Collabscore.pdf\">https:\/\/www-intuidoc.irisa.fr\/files\/2021\/10\/SujetInge_Collabscore.pdf<\/a><\/p>\n\n\n\n<p><strong class=\"\">Important Dates<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>December 1, 2021 (or later) - July 31, 2024  Contract period<\/code><\/pre>\n\n\n\n<p><strong class=\"\">IRISA &#8211; Intuidoc<\/strong><\/p>\n\n\n\n<p>IRISA is a joint research center for informatics, including robotics and image and signal processing. Its 850 people, in 40 teams, explore the world of digital sciences to find applications in healthcare, ecology and environment, cyber-security, transportation, multimedia, and industry. INSA Rennes is one of the 8 trustees of IRISA.<\/p>\n\n\n\n<p>The Intuidoc team (<a href=\"https:\/\/www.irisa.fr\/intuidoc\" class=\"\">https:\/\/www.irisa.fr\/intuidoc<\/a>) conducts research on document image recognition. For many years, the team has been developing a system for document structure analysis, the DMOS-PI method. DMOS-PI is used for document recognition and field extraction in archive documents, handwritten content, and damaged documents (musical scores, archives, newspapers, letters, electronic schemas, etc.).<\/p>\n\n\n\n<p><strong class=\"\">Collabscore project<\/strong><\/p>\n\n\n\n<p>Collabscore is a project funded by the ANR (French National Research Agency), led by the CNAM. The goal is to&nbsp;study ancient scores provided by the BnF (Biblioth\u00e8que Nationale de France) and the Royaumont foundation.&nbsp;Collabscore is a multidisciplinary project. The first task consists of improving OMR (Optical Music Recognition)&nbsp;results using learning techniques. The second will focus on methods for automatic alignment of the notated&nbsp;score with other multimodal sources. 
The last one will set up demonstrators based on notated scores at two of the&nbsp;project partners, representative, in various ways, of institutions in charge of musical heritage collections (BnF and&nbsp;Fondation Royaumont). The Intuidoc team focuses on the first task, musical score recognition.<\/p>\n\n\n\n<p><strong class=\"\">Position to be filled<\/strong><\/p>\n\n\n\n<ul><li>Position: Post-doctoral fellow \/ Research Engineer<\/li><li>Time commitment: Full-time<\/li><li>Duration of the contract: up to 32 months, starting as soon as possible<\/li><li>Supervisors: Bertrand Co\u00fcasnon, Aur\u00e9lie Lemaitre, Yann Soullard<\/li><li>Indicative salary: Up to \u20ac36 000 gross annual salary (according to experience), with social security benefits<\/li><li>Location: IRISA &#8212; Rennes, France<\/li><\/ul>\n\n\n\n<p><strong class=\"\">Missions<\/strong><\/p>\n\n\n\n<p>The post-doctoral fellow \/ research engineer will work on the design of an OMR system. Building on our research team's previous work, the goal of this position is to enrich an existing system (DMOS-PI) into a&nbsp;complete self-adaptive OMR system for historical orchestra scores. The tasks are mainly to:<\/p>\n\n\n\n<ul><li class=\"\">define a grammatical description of musical notation, using the existing DMOS-PI method;<\/li><li class=\"\">generate unsupervised data for training musical symbol recognizers, using the Isolating-GAN,&nbsp;a novel unsupervised music symbol detection method based on Generative Adversarial Networks (GANs);<\/li><li class=\"\">create a gradual mechanism for adapting the system to new scores, to build a self-adaptive system with little annotated data;<\/li><li class=\"\">integrate anomaly detection into the system.<\/li><\/ul>\n\n\n\n<p>This work will involve logic programming based on grammars and languages. 
Machine learning methods, especially deep learning-based approaches (GAN, RCNN, SSD&#8230;), will be used to solve some of the tasks, as in our previous work on music symbol detection.<\/p>\n\n\n\n<p><strong class=\"\">Applicant Requirements<\/strong><\/p>\n\n\n\n<ul><li class=\"\">PhD, Master's degree or Engineering degree in computer science.<\/li><li class=\"\">Experience in document recognition or statistical analysis.<\/li><li class=\"\">Skills in grammars and languages and\/or logic programming are nice to have,&nbsp;as is knowledge of music&nbsp;notation.<\/li><li class=\"\">Knowledge of deep learning and experience with at least one deep learning library (Keras,&nbsp;TensorFlow, PyTorch) are expected.<\/li><\/ul>\n\n\n\n<p>Candidates should apply by email to Bertrand Co\u00fcasnon (<a class=\"\" href=\"mailto:bertrand.couasnon@irisa.fr\">bertrand.couasnon@irisa.fr<\/a>), Aur\u00e9lie Lemaitre (<a class=\"\" href=\"mailto:aurelie.lemaitre@irisa.fr\">aurelie.lemaitre@irisa.fr<\/a>) and Yann Soullard (<a class=\"\" href=\"mailto:yann.soullard@irisa.fr\">yann.soullard@irisa.fr<\/a>).<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3><strong>Post-doctoral research position &#8211; L3i &#8211; La Rochelle, France<\/strong><\/h3>\n\n\n\n<p><strong>Title: Extraction of graphic elements in comic books for emotion recognition<\/strong><\/p>\n\n\n\n<p>The L3i laboratory has one open post-doc position in computer science, in the specific field of document image analysis and pattern recognition.<\/p>\n\n\n\n<p><strong>Duration<\/strong>: 12 months (an extension of 12 months will be possible)<br><strong>Position available from<\/strong>: As soon as possible, 2021<br><strong>Salary<\/strong>: approximately 2100 \u20ac \/ month (net)<br><strong>Place<\/strong>: L3i lab, University of La Rochelle, France<br><strong>Specialty<\/strong>: Computer Science\/ Image Processing\/ Document Analysis\/ Pattern 
Recognition<br><strong>Contact<\/strong>: Jean-Christophe BURIE (jcburie [at] univ-lr.fr)<\/p>\n\n\n\n<p><strong>Position Description<\/strong><\/p>\n\n\n\n<p>The L3i is a research lab of the University of La Rochelle. La Rochelle is a city in the south-west of France on the Atlantic coast and is one of the most attractive and dynamic cities in France. The L3i has worked for several years on document analysis and has developed well-known expertise in &#8220;Bande dessin\u00e9e&#8221;, manga and comics analysis, indexing and understanding.<\/p>\n\n\n\n<p>The post-doc's work will take place in the context of <strong>SAiL<\/strong> (Sequential Art Image Laboratory), a joint laboratory involving L3i and a private company. The objective is to create innovative tools to index and interact with digitized comics. The work will be done in a team of 10 researchers and engineers.<\/p>\n\n\n\n<p>The work will consist of developing original approaches for extracting and recognizing graphic elements in comic panels in order to recognize emotions. Authors typically use different strategies for representing emotions, such as the shape of speech balloons, specific symbols, the colour of faces, etc. These elements are drawn among the other graphic elements (main characters, scenery, \u2026), making their localisation and extraction challenging. Original approaches will therefore be necessary to extract these specific elements; deep learning-based strategies can be explored to reach this goal. This work will be done in collaboration with other researchers working on text understanding.<\/p>\n\n\n\n<p><strong>Qualifications<\/strong><\/p>\n\n\n\n<p>Candidates must have a completed PhD and research experience <strong>in image processing and analysis<\/strong> and <strong>pattern recognition<\/strong>. 
Some knowledge and experience in deep learning are also recommended.<\/p>\n\n\n\n<p><strong>General Qualifications<\/strong><\/p>\n\n\n\n<p>\u2022 Good programming skills mastering at least one programming language like Python, Java, C\/C++<br>\u2022 Good teamwork skills<br>\u2022 Good writing skills and proficiency in written and spoken English or French<\/p>\n\n\n\n<p><strong>Applications<\/strong><\/p>\n\n\n\n<p>Candidates should send a CV and a motivation letter to jcburie [at] univ-lr.fr.<\/p>\n\n\n\n<p><a href=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2021\/06\/2021_PostDoc_Extraction-of-graphic-elements-in-comics-books-for-emotion-recognition.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Download PDF<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Welcome to the December 2021 edition of the TC10 newsletter. In this issue, you will find the foreword of ICDAR 2021 General Chairs and registered participants can still access the&nbsp;digital [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_exactmetrics_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"_links_to":"","_links_to_target":""},"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/posts\/1413"}],"collection":[{"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1413"}],"version-history":[{"count":12,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp
\/v2\/posts\/1413\/revisions"}],"predecessor-version":[{"id":1435,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/posts\/1413\/revisions\/1435"}],"wp:attachment":[{"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1413"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1413"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1413"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}